Generating Compliance Service Reports
Generating Compliance Service Reports Red Hat Insights 1-latest Communicate the compliance status of your RHEL infrastructure to security stakeholders Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports/index
Chapter 7. GFS2 file systems in a cluster
Chapter 7. GFS2 file systems in a cluster Use the following administrative procedures to configure GFS2 file systems in a Red Hat high availability cluster. 7.1. Configuring a GFS2 file system in a cluster You can set up a Pacemaker cluster that includes GFS2 file systems with the following procedure. In this example, you create three GFS2 file systems on three logical volumes in a two-node cluster. Prerequisites Install and start the cluster software on both cluster nodes and create a basic two-node cluster. Configure fencing for the cluster. For information about creating a Pacemaker cluster and configuring fencing for the cluster, see Creating a Red Hat High-Availability cluster with Pacemaker . Procedure On both nodes in the cluster, enable the repository for Resilient Storage that corresponds to your system architecture. For example, to enable the Resilient Storage repository for an x86_64 system, you can enter the following subscription-manager command: Note that the Resilient Storage repository is a superset of the High Availability repository. If you enable the Resilient Storage repository you do not also need to enable the High Availability repository. On both nodes of the cluster, install the lvm2-lockd , gfs2-utils , and dlm packages. To support these packages, you must be subscribed to the AppStream channel and the Resilient Storage channel. On both nodes of the cluster, set the use_lvmlockd configuration option in the /etc/lvm/lvm.conf file to use_lvmlockd=1 . Set the global Pacemaker parameter no-quorum-policy to freeze . Note By default, the value of no-quorum-policy is set to stop , indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost both the applications using the GFS2 mounts and the GFS2 mount itself cannot be correctly stopped. Any attempts to stop these resources without quorum will fail which will ultimately result in the entire cluster being fenced every time quorum is lost. To address this situation, set no-quorum-policy to freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained. Set up a dlm resource. This is a required dependency for configuring a GFS2 file system in a cluster. This example creates the dlm resource as part of a resource group named locking . Clone the locking resource group so that the resource group can be active on both nodes of the cluster. Set up an lvmlockd resource as part of the locking resource group. Check the status of the cluster to ensure that the locking resource group has started on both nodes of the cluster. On one node of the cluster, create two shared volume groups. One volume group will contain two GFS2 file systems, and the other volume group will contain one GFS2 file system. Note If your LVM volume group contains one or more physical volumes that reside on remote block storage, such as an iSCSI target, Red Hat recommends that you ensure that the service starts before Pacemaker starts. For information about configuring startup order for a remote physical volume used by a Pacemaker cluster, see Configuring startup order for resource dependencies not managed by Pacemaker . The following command creates the shared volume group shared_vg1 on /dev/vdb . The following command creates the shared volume group shared_vg2 on /dev/vdc . 
On the second node in the cluster: If the use of a devices file is enabled with the use_devicesfile = 1 parameter in the lvm.conf file, add the shared devices to the devices file. This feature is enabled by default. Start the lock manager for each of the shared volume groups. On one node in the cluster, create the shared logical volumes and format the volumes with a GFS2 file system. One journal is required for each node that mounts the file system. Ensure that you create enough journals for each of the nodes in your cluster. The format of the lock table name is ClusterName:FSName, where ClusterName is the name of the cluster for which the GFS2 file system is being created and FSName is the file system name, which must be unique for all lock_dlm file systems across the cluster. Create an LVM-activate resource for each logical volume to automatically activate that logical volume on all nodes. Create an LVM-activate resource named sharedlv1 for the logical volume shared_lv1 in volume group shared_vg1 . This command also creates the resource group shared_vg1 that includes the resource. In this example, the resource group has the same name as the shared volume group that includes the logical volume. Create an LVM-activate resource named sharedlv2 for the logical volume shared_lv2 in volume group shared_vg1 . This resource will also be part of the resource group shared_vg1 . Create an LVM-activate resource named sharedlv3 for the logical volume shared_lv1 in volume group shared_vg2 . This command also creates the resource group shared_vg2 that includes the resource. Clone the two new resource groups. Configure ordering constraints to ensure that the locking resource group that includes the dlm and lvmlockd resources starts first. Configure colocation constraints to ensure that the shared_vg1 and shared_vg2 resource groups start on the same node as the locking resource group. On both nodes in the cluster, verify that the logical volumes are active. There may be a delay of a few seconds. Create a file system resource to automatically mount each GFS2 file system on all nodes. You should not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration with options= options . Run the pcs resource describe Filesystem command to display the full configuration options. The following commands create the file system resources. These commands add each resource to the resource group that includes the logical volume resource for that file system. Verification Verify that the GFS2 file systems are mounted on both nodes of the cluster. Check the status of the cluster. Additional resources Configuring GFS2 file systems Configuring a Red Hat High Availability cluster on Microsoft Azure Configuring a Red Hat High Availability cluster on AWS Configuring a Red Hat High Availability Cluster on Google Cloud Platform 7.2. Configuring an encrypted GFS2 file system in a cluster You can create a Pacemaker cluster that includes a LUKS-encrypted GFS2 file system with the following procedure. In this example, you create one GFS2 file system on a logical volume and encrypt the file system. Encrypted GFS2 file systems are supported using the crypt resource agent, which provides support for LUKS encryption.
There are three parts to this procedure: Configuring a shared logical volume in a Pacemaker cluster Encrypting the logical volume and creating a crypt resource Formatting the encrypted logical volume with a GFS2 file system and creating a file system resource for the cluster 7.2.1. Configure a shared logical volume in a Pacemaker cluster Prerequisites Install and start the cluster software on two cluster nodes and create a basic two-node cluster. Configure fencing for the cluster. For information about creating a Pacemaker cluster and configuring fencing for the cluster, see Creating a Red Hat High-Availability cluster with Pacemaker . Procedure On both nodes in the cluster, enable the repository for Resilient Storage that corresponds to your system architecture. For example, to enable the Resilient Storage repository for an x86_64 system, you can enter the following subscription-manager command: Note that the Resilient Storage repository is a superset of the High Availability repository. If you enable the Resilient Storage repository you do not also need to enable the High Availability repository. On both nodes of the cluster, install the lvm2-lockd , gfs2-utils , and dlm packages. To support these packages, you must be subscribed to the AppStream channel and the Resilient Storage channel. On both nodes of the cluster, set the use_lvmlockd configuration option in the /etc/lvm/lvm.conf file to use_lvmlockd=1 . Set the global Pacemaker parameter no-quorum-policy to freeze . Note By default, the value of no-quorum-policy is set to stop , indicating that when quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost both the applications using the GFS2 mounts and the GFS2 mount itself cannot be correctly stopped. Any attempts to stop these resources without quorum will fail which will ultimately result in the entire cluster being fenced every time quorum is lost. To address this situation, set no-quorum-policy to freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained. Set up a dlm resource. This is a required dependency for configuring a GFS2 file system in a cluster. This example creates the dlm resource as part of a resource group named locking . Clone the locking resource group so that the resource group can be active on both nodes of the cluster. Set up an lvmlockd resource as part of the group locking . Check the status of the cluster to ensure that the locking resource group has started on both nodes of the cluster. On one node of the cluster, create a shared volume group. Note If your LVM volume group contains one or more physical volumes that reside on remote block storage, such as an iSCSI target, Red Hat recommends that you ensure that the service starts before Pacemaker starts. For information about configuring startup order for a remote physical volume used by a Pacemaker cluster, see Configuring startup order for resource dependencies not managed by Pacemaker . The following command creates the shared volume group shared_vg1 on /dev/sda1 . On the second node in the cluster: If the use of a devices file is enabled with the use_devicesfile = 1 parameter in the lvm.conf file, add the shared device to the devices file on the second node in the cluster. This feature is enabled by default. Start the lock manager for the shared volume group. 
On one node in the cluster, create the shared logical volume. Create an LVM-activate resource for the logical volume to automatically activate the logical volume on all nodes. The following command creates an LVM-activate resource named sharedlv1 for the logical volume shared_lv1 in volume group shared_vg1 . This command also creates the resource group shared_vg1 that includes the resource. In this example, the resource group has the same name as the shared volume group that includes the logical volume. Clone the new resource group. Configure an ordering constraint to ensure that the locking resource group that includes the dlm and lvmlockd resources starts first. Configure a colocation constraint to ensure that the shared_vg1 resource group starts on the same node as the locking resource group. Verification On both nodes in the cluster, verify that the logical volume is active. There may be a delay of a few seconds. 7.2.2. Encrypt the logical volume and create a crypt resource Prerequisites You have configured a shared logical volume in a Pacemaker cluster. Procedure On one node in the cluster, create a new file that will contain the crypt key and set the permissions on the file so that it is readable only by root. Create the crypt key. Distribute the crypt keyfile to the other nodes in the cluster, using the -p parameter to preserve the permissions you set. Create the encrypted device on the LVM volume where you will configure the encrypted GFS2 file system. Create the crypt resource as part of the shared_vg1 volume group. Verification Ensure that the crypt resource has created the crypt device, which in this example is /dev/mapper/luks_lv1 . 7.2.3. Format the encrypted logical volume with a GFS2 file system and create a file system resource for the cluster Prerequisites You have encrypted the logical volume and created a crypt resource. Procedure On one node in the cluster, format the volume with a GFS2 file system. One journal is required for each node that mounts the file system. Ensure that you create enough journals for each of the nodes in your cluster. The format of the lock table name is ClusterName:FSName, where ClusterName is the name of the cluster for which the GFS2 file system is being created and FSName is the file system name, which must be unique for all lock_dlm file systems across the cluster. Create a file system resource to automatically mount the GFS2 file system on all nodes. Do not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration with options= options . Run the pcs resource describe Filesystem command for full configuration options. The following command creates the file system resource. This command adds the resource to the resource group that includes the logical volume resource for that file system. Verification Verify that the GFS2 file system is mounted on both nodes of the cluster. Check the status of the cluster. Additional resources Configuring GFS2 file systems
[ "subscription-manager repos --enable=rhel-9-for-x86_64-resilientstorage-rpms", "dnf install lvm2-lockd gfs2-utils dlm", "use_lvmlockd = 1", "pcs property set no-quorum-policy=freeze", "pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence", "pcs resource clone locking interleave=true", "pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence", "pcs status --full Cluster name: my_cluster [...] Online: [ z1.example.com (1) z2.example.com (2) ] Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Started: [ z1.example.com z2.example.com ]", "vgcreate --shared shared_vg1 /dev/vdb Physical volume \"/dev/vdb\" successfully created. Volume group \"shared_vg1\" successfully created VG shared_vg1 starting dlm lockspace Starting locking. Waiting until locks are ready", "vgcreate --shared shared_vg2 /dev/vdc Physical volume \"/dev/vdc\" successfully created. Volume group \"shared_vg2\" successfully created VG shared_vg2 starting dlm lockspace Starting locking. Waiting until locks are ready", "lvmdevices --adddev /dev/vdb lvmdevices --adddev /dev/vdc", "vgchange --lockstart shared_vg1 VG shared_vg1 starting dlm lockspace Starting locking. Waiting until locks are ready vgchange --lockstart shared_vg2 VG shared_vg2 starting dlm lockspace Starting locking. Waiting until locks are ready", "lvcreate --activate sy -L5G -n shared_lv1 shared_vg1 Logical volume \"shared_lv1\" created. lvcreate --activate sy -L5G -n shared_lv2 shared_vg1 Logical volume \"shared_lv2\" created. lvcreate --activate sy -L5G -n shared_lv1 shared_vg2 Logical volume \"shared_lv1\" created. 
mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo1 /dev/shared_vg1/shared_lv1 mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo2 /dev/shared_vg1/shared_lv2 mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo3 /dev/shared_vg2/shared_lv1", "pcs resource create sharedlv1 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource create sharedlv2 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv2 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource create sharedlv3 --group shared_vg2 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg2 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource clone shared_vg1 interleave=true pcs resource clone shared_vg2 interleave=true", "pcs constraint order start locking-clone then shared_vg1-clone Adding locking-clone shared_vg1-clone (kind: Mandatory) (Options: first-action=start then-action=start) pcs constraint order start locking-clone then shared_vg2-clone Adding locking-clone shared_vg2-clone (kind: Mandatory) (Options: first-action=start then-action=start)", "pcs constraint colocation add shared_vg1-clone with locking-clone pcs constraint colocation add shared_vg2-clone with locking-clone", "lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g shared_lv2 shared_vg1 -wi-a----- 5.00g shared_lv1 shared_vg2 -wi-a----- 5.00g lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g shared_lv2 shared_vg1 -wi-a----- 5.00g shared_lv1 shared_vg2 -wi-a----- 5.00g", "pcs resource create sharedfs1 --group shared_vg1 ocf:heartbeat:Filesystem device=\"/dev/shared_vg1/shared_lv1\" directory=\"/mnt/gfs1\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence pcs resource create sharedfs2 --group shared_vg1 ocf:heartbeat:Filesystem device=\"/dev/shared_vg1/shared_lv2\" directory=\"/mnt/gfs2\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence pcs resource create sharedfs3 --group shared_vg2 ocf:heartbeat:Filesystem device=\"/dev/shared_vg2/shared_lv1\" directory=\"/mnt/gfs3\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence", "mount | grep gfs2 /dev/mapper/shared_vg1-shared_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg1-shared_lv2 on /mnt/gfs2 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg2-shared_lv1 on /mnt/gfs3 type gfs2 (rw,noatime,seclabel) mount | grep gfs2 /dev/mapper/shared_vg1-shared_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg1-shared_lv2 on /mnt/gfs2 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg2-shared_lv1 on /mnt/gfs3 type gfs2 (rw,noatime,seclabel)", "pcs status --full Cluster name: my_cluster [...] 
Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Started: [ z1.example.com z2.example.com ] Clone Set: shared_vg1-clone [shared_vg1] Resource Group: shared_vg1:0 sharedlv1 (ocf::heartbeat:LVM-activate): Started z2.example.com sharedlv2 (ocf::heartbeat:LVM-activate): Started z2.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z2.example.com sharedfs2 (ocf::heartbeat:Filesystem): Started z2.example.com Resource Group: shared_vg1:1 sharedlv1 (ocf::heartbeat:LVM-activate): Started z1.example.com sharedlv2 (ocf::heartbeat:LVM-activate): Started z1.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z1.example.com sharedfs2 (ocf::heartbeat:Filesystem): Started z1.example.com Started: [ z1.example.com z2.example.com ] Clone Set: shared_vg2-clone [shared_vg2] Resource Group: shared_vg2:0 sharedlv3 (ocf::heartbeat:LVM-activate): Started z2.example.com sharedfs3 (ocf::heartbeat:Filesystem): Started z2.example.com Resource Group: shared_vg2:1 sharedlv3 (ocf::heartbeat:LVM-activate): Started z1.example.com sharedfs3 (ocf::heartbeat:Filesystem): Started z1.example.com Started: [ z1.example.com z2.example.com ]", "subscription-manager repos --enable=rhel-9-for-x86_64-resilientstorage-rpms", "dnf install lvm2-lockd gfs2-utils dlm", "use_lvmlockd = 1", "pcs property set no-quorum-policy=freeze", "pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence", "pcs resource clone locking interleave=true", "pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence", "pcs status --full Cluster name: my_cluster [...] Online: [ z1.example.com (1) z2.example.com (2) ] Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Started: [ z1.example.com z2.example.com ]", "vgcreate --shared shared_vg1 /dev/sda1 Physical volume \"/dev/sda1\" successfully created. Volume group \"shared_vg1\" successfully created VG shared_vg1 starting dlm lockspace Starting locking. Waiting until locks are ready", "lvmdevices --adddev /dev/sda1", "vgchange --lockstart shared_vg1 VG shared_vg1 starting dlm lockspace Starting locking. 
Waiting until locks are ready", "lvcreate --activate sy -L5G -n shared_lv1 shared_vg1 Logical volume \"shared_lv1\" created.", "pcs resource create sharedlv1 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource clone shared_vg1 interleave=true", "pcs constraint order start locking-clone then shared_vg1-clone Adding locking-clone shared_vg1-clone (kind: Mandatory) (Options: first-action=start then-action=start)", "pcs constraint colocation add shared_vg1-clone with locking-clone", "lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g", "touch /etc/crypt_keyfile chmod 600 /etc/crypt_keyfile", "dd if=/dev/urandom bs=4K count=1 of=/etc/crypt_keyfile 1+0 records in 1+0 records out 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306202 s, 13.4 MB/s scp /etc/crypt_keyfile [email protected]:/etc/", "scp -p /etc/crypt_keyfile [email protected]:/etc/", "cryptsetup luksFormat /dev/shared_vg1/shared_lv1 --type luks2 --key-file=/etc/crypt_keyfile WARNING! ======== This will overwrite data on /dev/shared_vg1/shared_lv1 irrevocably. Are you sure? (Type 'yes' in capital letters): YES", "pcs resource create crypt --group shared_vg1 ocf:heartbeat:crypt crypt_dev=\"luks_lv1\" crypt_type=luks2 key_file=/etc/crypt_keyfile encrypted_dev=\"/dev/shared_vg1/shared_lv1\"", "ls -l /dev/mapper/ lrwxrwxrwx 1 root root 7 Mar 4 09:52 luks_lv1 -> ../dm-3", "mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:gfs2-demo1 /dev/mapper/luks_lv1 /dev/mapper/luks_lv1 is a symbolic link to /dev/dm-3 This will destroy any data on /dev/dm-3 Are you sure you want to proceed? [y/n] y Discarding device contents (may take a while on large devices): Done Adding journals: Done Building resource groups: Done Creating quota file: Done Writing superblock and syncing: Done Device: /dev/mapper/luks_lv1 Block size: 4096 Device size: 4.98 GB (1306624 blocks) Filesystem size: 4.98 GB (1306622 blocks) Journals: 3 Journal size: 16MB Resource groups: 23 Locking protocol: \"lock_dlm\" Lock table: \"my_cluster:gfs2-demo1\" UUID: de263f7b-0f12-4d02-bbb2-56642fade293", "pcs resource create sharedfs1 --group shared_vg1 ocf:heartbeat:Filesystem device=\"/dev/mapper/luks_lv1\" directory=\"/mnt/gfs1\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence", "mount | grep gfs2 /dev/mapper/luks_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel) mount | grep gfs2 /dev/mapper/luks_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel)", "pcs status --full Cluster name: my_cluster [...] Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Started: [ z1.example.com z2.example.com ] Clone Set: shared_vg1-clone [shared_vg1] Resource Group: shared_vg1:0 sharedlv1 (ocf::heartbeat:LVM-activate): Started z2.example.com crypt (ocf::heartbeat:crypt) Started z2.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z2.example.com Resource Group: shared_vg1:1 sharedlv1 (ocf::heartbeat:LVM-activate): Started z1.example.com crypt (ocf::heartbeat:crypt) Started z1.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z1.example.com Started: [z1.example.com z2.example.com ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_configuring-gfs2-in-a-cluster-configuring-and-managing-high-availability-clusters
15.3. Testing With Your VDB
15.3. Testing With Your VDB 15.3.1. Testing With Your VDB In Teiid Designer you can execute a VDB to test/query actual data. The requirements for VDB execution are: A deployed VDB backed by valid deployed Data Sources An instance of a Teiid Connection Profile configured for the deployed VDB Teiid Designer simplifies this process via the Deploy VDB and Execute VDB actions. Deploy VDB does just that: it deploys a selected VDB to a running Teiid instance. Execute VDB performs the VDB deployment, creates a Teiid Connection Profile, opens the Database Development perspective and creates a connection to your VDB. 15.3.2. Creating Data Sources The mechanism by which VDBs are able to query actual data sources is the Data Source. These are deployed configurations backed by database or source connection jars. Each source model referenced within a VDB requires a JNDI name representing a deployed Data Source. When creating VDBs you do not need to have deployed data sources on your JBoss Data Virtualization server, but if you wish to test your VDB, the data sources need to be present. Teiid Designer provides a Create Data Source action so you can create compatible data sources for your source model. If you wish to create a data source for a specific model, you can select that source model in your workspace and select the Modeling > Create Data Source action. This will extract the connection profile data from your source model and create a corresponding data source on your default JBoss Data Virtualization server. You can also create data sources from the Server view. Expand a JBoss Data Virtualization server instance in the Server view, select a particular data source and right-click the Create Data Source action. This will launch the Create Data Source dialog. Figure 15.10. Create Data Source Dialog You can either select an existing Connection Profile from the drop-down list (Use Connection Profile Info option) or check the Use Model Info option and select an existing source model containing connection info. After creating your new data source, it should now be shown in the Data Sources folder of the corresponding JBoss Data Virtualization server. 15.3.3. Execute VDB from Model Explorer If you have a JBoss Data Virtualization instance defined and connected in your Server view, you can: Right-click a VDB in your Model Explorer and select the Modeling > Execute VDB action. This action will ensure your selected VDB is deployed to JBoss Data Virtualization, create a Teiid Connection Profile specific for that VDB, open the Database Development perspective and create a connection to your VDB. Figure 15.11. Execute VDB Action In the SQL Scrapbook , enter your designer SQL (for example, SELECT * FROM TableXXXX ), select all text and right-click select Execute Selected Text . Figure 15.12. SQL Scrapbook Editor Results of the query should be displayed in the SQL Results view on the Result1 tab. Figure 15.13. SQL Results View 15.3.4. Deploy VDB from Model Explorer You can also deploy your VDB first by selecting it in the Model Explorer and dragging/dropping it onto a connected JBoss Data Virtualization instance in the Server view, or right-click select Modeling > Deploy action. Once deployed, you can select the VDB in the Server view and right-click select the Execute VDB action there. This will create a Teiid Connection Profile specific for that VDB, open the Database Development perspective and create a connection to your VDB. Continue with Steps 2 and 3 above.
Note If you do not have a JBoss Data Virtualization instance defined or your default JBoss Data Virtualization instance is disconnected, the following dialog will be displayed if the Modeling > Deploy action is launched. Figure 15.14. No Teiid Instance Defined 15.3.5. Executing a Deployed VDB To execute a VDB that has been deployed manually, follow the steps below: Open the Database Development perspective. Select the Database Connections folder and choose the New Connection Profile action to display the New Connection Profile dialog. Figure 15.15. New Connection Profile Dialog Enter a unique name for your profile, select an existing connection profile type and continue. In the Teiid Profile Wizard page, select the New Driver Definition button to locate and select the Teiid client jar on your file system. Configure your URL using your VDB Name, Host, Port, Username (default = admin) and Password (default = teiid). Figure 15.16. Teiid Connection Profile Dialog Continue to view a summary of your new Teiid Connection Profile. Figure 15.17. Teiid Connection Profile Summary Select Finish . Select your new Teiid connection profile and right-click select Open SQL Scrapbook , enter your designer SQL (for example, SELECT * FROM TableXXXX), select all text and right-click select Execute Selected Text . Figure 15.18. SQL Scrapbook Editor
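For reference, a Teiid JDBC connection URL for the profile described above typically follows the pattern below. This is a sketch rather than a value taken from this guide: the VDB name and host are placeholders, and 31000 is assumed to be the default JDBC port of the JBoss Data Virtualization server, so confirm the port in your server configuration.

    jdbc:teiid:<VDB_NAME>@mm://<host>:31000

For example, a profile for a VDB named MyVDB on a local test server might use jdbc:teiid:MyVDB@mm://localhost:31000 together with the default admin/teiid credentials mentioned above.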
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/sect-testing_with_your_vdb
Chapter 10. Monitoring and Logging
Chapter 10. Monitoring and Logging Log management is an important component of monitoring the security status of your OpenStack deployment. Logs provide insight into the business-as-usual (BAU) actions of administrators, projects, and instances, in addition to the component activities that comprise your OpenStack deployment. Logs are not only valuable for proactive security and continuous compliance activities, but they are also a valuable information source for investigation and incident response. For example, analyzing the keystone access logs could alert you to failed logins, their frequency, origin IP, and whether the events are restricted to select accounts, among other pertinent information. The director includes intrusion detection capabilities using AIDE, and CADF auditing for keystone. For more information, see the director hardening chapter. 10.1. Harden the monitoring infrastructure Centralized logging systems are a high-value target for intruders, as a successful breach could allow them to erase or tamper with the record of events. It is recommended that you harden the monitoring platform with this in mind. In addition, consider making regular backups of these systems, with failover planning in the event of an outage or DoS. 10.2. Example events to monitor Event monitoring is a more proactive approach to securing an environment, providing real-time detection and response. Multiple tools exist that can aid in monitoring. For an OpenStack deployment, you will need to monitor the hardware, the OpenStack services, and the cloud resource usage. This section describes some example events you might need to be aware of. Important This list is not exhaustive. You will need to consider additional use cases that might apply to your specific network, and that you might consider anomalous behavior. Detecting the absence of log generation is an event of high value. Such a gap might indicate a service failure, or even an intruder who has temporarily switched off logging or modified the log level to hide their tracks. Unscheduled application events, such as start or stop events, might have security implications. Operating system events on the OpenStack nodes, such as user logins or restarts. These can provide valuable insight into distinguishing between proper and improper usage of systems. Networking bridges going down. This would be an actionable event due to the risk of service outage. iptables flushing events on Compute nodes, and the resulting loss of access to instances. To reduce security risks from orphaned instances when a user, project, or domain is deleted in the Identity service, there is discussion about generating notifications in the system and having OpenStack components respond to these events as appropriate, such as terminating instances, disconnecting attached volumes, and reclaiming CPU and storage resources. Security monitoring controls such as intrusion detection software, antivirus software, and spyware detection and removal utilities can generate logs that show when and how an attack or intrusion took place. These tools can provide a layer of protection when deployed on the OpenStack nodes. Project users might also want to run such tools on their instances.
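As a sketch of the kind of log analysis described above, the following commands count failed authentication events in a keystone log and summarize the originating addresses. The log path and the "Authorization failed" message pattern are assumptions that vary by release and deployment (containerized services, for example, log under /var/log/containers/), so adapt both before use:

    # count failed authentication events (path and message pattern are assumptions)
    grep -c "Authorization failed" /var/log/keystone/keystone.log
    # summarize the source addresses recorded with those failures
    grep "Authorization failed" /var/log/keystone/keystone.log | grep -oE "from [0-9.]+" | sort | uniq -c | sort -rn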
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/security_and_hardening_guide/monitoring_and_logging
28.6.2. Booting Your Computer with the Rescue Mode
28.6.2. Booting Your Computer with the Rescue Mode You may boot a command-line Linux system from either a rescue disc or an installation disc, without installing Red Hat Enterprise Linux on the computer. This enables you to use the utilities and functions of a running Linux system to modify or repair systems that are already installed on your computer. The rescue disc starts the rescue mode system by default. To load the rescue system with the installation disc, choose Rescue installed system from the boot menu. Specify the language, keyboard layout and network settings for the rescue system with the screens that follow. The final setup screen configures access to the existing system on your computer. By default, rescue mode attaches an existing operating system to the rescue system under the directory /mnt/sysimage/ .
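Once the installed system is attached under /mnt/sysimage/ , a common next step is to change root into it so that repair commands operate on the installed system rather than on the rescue environment. A minimal sketch follows; the boot loader reinstallation line is only illustrative, and /dev/sda must be replaced with your actual boot device:

    chroot /mnt/sysimage
    # run repair commands against the installed system, for example:
    grub-install /dev/sda    # illustrative only; adjust to your boot loader and device
    exit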
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-mode-rescue
3.10. Searching for Hosts
3.10. Searching for Hosts The following table describes all search options for hosts. Table 3.6. Searching for Hosts Property (of resource or resource-type) Type Description (Reference) Vms. Vms-prop Depends on property type The property of the virtual machines associated with the host. Templates. templates-prop Depends on property type The property of the templates associated with the host. Events. events-prop Depends on property type The property of the events associated with the host. Users. users-prop Depends on property type The property of the users associated with the host. name String The name of the host. status List The availability of the host. external_status String The health status of the host as reported by external systems and plug-ins. cluster String The cluster to which the host belongs. address String The unique name that identifies the host on the network. cpu_usage Integer The percent of processing power used. mem_usage Integer The percentage of memory used. network_usage Integer The percentage of network usage. load Integer Jobs waiting to be executed in the run-queue per processor, in a given time slice. version Integer The version number of the operating system. cpus Integer The number of CPUs on the host. memory Integer The amount of memory available. cpu_speed Integer The processing speed of the CPU. cpu_model String The type of CPU. active_vms Integer The number of virtual machines currently running. migrating_vms Integer The number of virtual machines currently being migrated. committed_mem Integer The percentage of committed memory. tag String The tag assigned to the host. type String The type of host. datacenter String The data center to which the host belongs. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Hosts: cluster = Default and Vms.os = rhel6 This example returns a list of hosts which are part of the Default cluster and host virtual machines running the Red Hat Enterprise Linux 6 operating system.
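As a further, hedged illustration of combining these properties (the exact sort syntax may differ in your version, so verify it against your environment), a search that lists the running hosts in the Default cluster ordered by descending CPU usage might look like:

    Hosts: cluster = Default and status = up sortby cpu_usage desc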
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/searching_for_hosts
12.5.2. Multiple Views
12.5.2. Multiple Views Through the use of the view statement in named.conf , BIND can present different information depending on which network a request originates from. This is primarily used to deny sensitive DNS entries to clients outside of the local network, while allowing queries from clients inside the local network. The view statement uses the match-clients option to match IP addresses or entire networks and give them special options and zone data.
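A minimal named.conf sketch of this split-view arrangement is shown below; the network, zone, and file names are placeholders, and only the view and match-clients syntax is the point of the example. BIND evaluates views in order and answers from the first view whose match-clients list matches the querying client, so place the more specific internal view first. Note also that once any view is defined, all zone statements must appear inside views.

    view "internal" {
            // clients on the local network see the full zone data
            match-clients { localhost; 10.0.1.0/24; };
            zone "example.com" {
                    type master;
                    file "example.com.zone.internal";
            };
    };
    view "external" {
            // all other clients see a reduced, public-only zone file
            match-clients { any; };
            zone "example.com" {
                    type master;
                    file "example.com.zone.external";
            };
    };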
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-bind-features-views
Chapter 9. Troubleshooting Dev Spaces
Chapter 9. Troubleshooting Dev Spaces This section provides troubleshooting procedures for the most frequent issues a user can come in conflict with. Additional resources Section 9.1, "Viewing Dev Spaces workspaces logs" Section 9.2, "Troubleshooting slow workspaces" Section 9.3, "Troubleshooting network problems" Section 9.4, "Troubleshooting webview loading error" 9.1. Viewing Dev Spaces workspaces logs You can view OpenShift Dev Spaces logs to better understand and debug background processes should a problem occur. An IDE extension misbehaves or needs debugging The logs list the plugins that have been loaded by the editor. The container runs out of memory The logs contain an OOMKilled error message. Processes running in the container attempted to request more memory than is configured to be available to the container. A process runs out of memory The logs contain an error message such as OutOfMemoryException . A process inside the container ran out of memory without the container noticing. Additional resources Section 9.1.1, "Workspace logs in CLI" Section 9.1.2, "Workspace logs in OpenShift console" Section 9.1.3, "Language servers and debug adapters logs in the editor" 9.1.1. Workspace logs in CLI You can use the OpenShift CLI to observe the OpenShift Dev Spaces workspace logs. Prerequisites The OpenShift Dev Spaces workspace <workspace_name> is running. Your OpenShift CLI session has access to the OpenShift project <namespace_name> containing this workspace. Procedure Get the logs from the pod running the <workspace_name> workspace in the <namespace_name> project: 9.1.2. Workspace logs in OpenShift console You can use the OpenShift console to observe the OpenShift Dev Spaces workspace logs. Procedure In the OpenShift Dev Spaces dashboard, go to Workspaces . Click on a workspace name to display the workspace overview page. This page displays the OpenShift project name <project_name> . Click on the upper right Applications menu, and click the OpenShift console link. Run the steps in the OpenShift console, in the Administrator perspective. Click Workloads > Pods to see a list of all the active workspaces. In the Project drop-down menu, select the <project_name> project to narrow the search. Click on the name of the running pod that runs the workspace. The Details tab contains the list of all containers with additional information. Go to the Logs tab. 9.1.3. Language servers and debug adapters logs in the editor In the Microsoft Visual Studio Code - Open Source editor running in your workspace, you can configure the installed language server and debug adapter extensions to view their logs. Procedure Configure the extension: click File > Preferences > Settings , expand the Extensions section, search for your extension, and set the trace.server or similar configuration to verbose , if such configuration exists. Refer to the extension documentation for further configuration. View your language server logs by clicking View Output , and selecting your language server in the drop-down list for the Output view. Additional resources Open VSX registry 9.2. Troubleshooting slow workspaces Sometimes, workspaces can take a long time to start. Tuning can reduce this start time. Depending on the options, administrators or users can do the tuning. This section includes several tuning options for starting workspaces faster or improving workspace runtime performance. 9.2.1. 
Improving workspace start time Caching images with Image Puller Role: Administrator When starting a workspace, OpenShift pulls the images from the registry. A workspace can include many containers meaning that OpenShift pulls Pod's images (one per container). Depending on the size of the image and the bandwidth, it can take a long time. Image Puller is a tool that can cache images on each of OpenShift nodes. As such, pre-pulling images can improve start times. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.15/html-single/administration_guide/index#administration-guide:caching-images-for-faster-workspace-start . Choosing better storage type Role: Administrator and user Every workspace has a shared volume attached. This volume stores the project files, so that when restarting a workspace, changes are still available. Depending on the storage, attach time can take up to a few minutes, and I/O can be slow. Installing offline Role: Administrator Components of OpenShift Dev Spaces are OCI images. Set up Red Hat OpenShift Dev Spaces in offline mode to reduce any extra download at runtime because everything needs to be available from the beginning. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.15/html-single/administration_guide/index#administration-guide:installing-che-in-a-restricted-environment . Reducing the number of public endpoints Role: Administrator For each endpoint, OpenShift is creating OpenShift Route objects. Depending on the underlying configuration, this creation can be slow. To avoid this problem, reduce the exposure. For example, to automatically detect a new port listening inside containers and redirect traffic for the processes using a local IP address ( 127.0.0.1 ), Microsoft Visual Code - Open Source has three optional routes. By reducing the number of endpoints and checking endpoints of all plugins, workspace start can be faster. 9.2.2. Improving workspace runtime performance Providing enough CPU resources Plugins consume CPU resources. For example, when a plugin provides IntelliSense features, adding more CPU resources can improve performance. Ensure the CPU settings in the devfile definition, devfile.yaml , are correct: components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest cpuLimit: 4000m 1 cpuRequest: 1000m 2 1 Specifies the CPU limit 2 Specifies the CPU request Providing enough memory Plug-ins consume CPU and memory resources. For example, when a plugin provides IntelliSense features, collecting data can consume all the memory allocated to the container. Providing more memory to the container can increase performance. Ensure that memory settings in the devfile definition devfile.yaml file are correct. components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest memoryLimit: 6G 1 memoryRequest: 512Mi 2 1 Specifies the memory limit 2 Specifies the memory request 9.3. Troubleshooting network problems This section describes how to prevent or resolve issues related to network policies. OpenShift Dev Spaces requires the availability of the WebSocket Secure (WSS) connections. Secure WebSocket connections improve confidentiality and also reliability because they reduce the risk of interference by bad proxies. Prerequisites The WebSocket Secure (WSS) connections on port 443 must be available on the network. Firewall and proxy may need additional configuration. Procedure Verify the browser supports the WebSocket protocol. 
See: search online for a WebSocket test page. Verify firewall settings: WebSocket Secure (WSS) connections on port 443 must be available. Verify proxy server settings: The proxy transmits and intercepts WebSocket Secure (WSS) connections on port 443. 9.4. Troubleshooting webview loading error If you use Microsoft Visual Studio Code - Open Source in a private browsing window, you might encounter the following error message: Error loading webview: Error: Could not register service workers . This is a known issue affecting the following browsers: Google Chrome in Incognito mode Mozilla Firefox in Private Browsing mode Table 9.1. Dealing with the webview error in a private browsing window Browser Workarounds Google Chrome Go to Settings > Privacy and security > Cookies and other site data > Allow all cookies . Mozilla Firefox Webviews are not supported in Private Browsing mode. See this reported bug for details.
[ "oc logs --follow --namespace=' <workspace_namespace> ' --selector='controller.devfile.io/devworkspace_name= <workspace_name> '", "components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest cpuLimit: 4000m 1 cpuRequest: 1000m 2", "components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest memoryLimit: 6G 1 memoryRequest: 512Mi 2" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/user_guide/troubleshooting-devspaces
Chapter 57. Networking
Chapter 57. Networking Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 It is impossible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5 signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important: Note that MD5 certificates are highly insecure and Red Hat does not recommend using them. (BZ#1062656) Mellanox PMD in DPDK causes a performance drop when IOMMU is enabled inside the guest When running Mellanox Poll Mode Driver (PMD) in Data Plane Development Kit (DPDK) in the guest, a performance drop is expected if the iommu=pt option is not set. To make Mellanox PMD work properly, I/O memory management unit (IOMMU) needs to be explicitly enabled in the kernel, and use the passthrough mode. For doing that, pass the intel_iommu=on option (for Intel systems) to the kernel command line. In addition, use iommu=pt to have a proper I/O performance. (BZ#1578688) freeradius might fail when upgrading from RHEL 7.3 A new configuration property, correct_escapes , in the /etc/raddb/radiusd.conf file was introduced in the freeradius version distributed since RHEL 7.4. When an administrator sets correct_escapes to true , the new regular expression syntax for backslash escaping is expected. If correct_escapes is set to false , the old syntax is expected where backslashes are also escaped. For backward compatibility reasons, false is the default value. When upgrading, configuration files in the /etc/raddb/ directory are overwritten unless modified by the administrator, so the value of correct_escapes might not always correspond to which type of syntax is used in all the configuration files. As a consequence, authentication with freeradius might fail. To prevent the problem from occurring, after upgrading from freeradius version 3.0.4 (distributed with RHEL 7.3) and earlier, make sure all configuration files in the /etc/raddb/ directory use the new escaping syntax (no double backslash characters can be found) and that the value of correct_escapes in /etc/raddb/radiusd.conf is set to true . For more information and examples, see the solution at https://access.redhat.com/solutions/3241961 . (BZ#1489758)
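A condensed sketch of the wpa_supplicant workaround described above; the Environment line is the one shown in the command listing that follows, and the unit file must be edited with a text editor of your choice:

    cp /usr/lib/systemd/system/wpa_supplicant.service /etc/systemd/system/
    # edit /etc/systemd/system/wpa_supplicant.service and add the following
    # line to the [Service] section:
    #     Environment=OPENSSL_ENABLE_MD5_VERIFY=1
    systemctl daemon-reload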
[ "Environment=OPENSSL_ENABLE_MD5_VERIFY=1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/known_issues_networking
Image APIs
Image APIs OpenShift Container Platform 4.18 Reference guide for image APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/image_apis/index
Chapter 2. Understanding ephemeral storage
Chapter 2. Understanding ephemeral storage 2.1. Overview In addition to persistent storage, pods and containers can require ephemeral or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Pods use ephemeral local storage for scratch space, caching, and logs. Issues related to the lack of local storage accounting and isolation include the following: Pods do not know how much local storage is available to them. Pods cannot request guaranteed local storage. Local storage is a best effort resource. Pods can be evicted due to other pods filling the local storage, after which new pods are not admitted until sufficient storage has been reclaimed. Unlike persistent volumes, ephemeral storage is unstructured and the space is shared between all pods running on a node, in addition to other uses by the system, the container runtime, and OpenShift Container Platform. The ephemeral storage framework allows pods to specify their transient local storage needs. It also allows OpenShift Container Platform to schedule pods where appropriate, and to protect the node against excessive use of local storage. While the ephemeral storage framework allows administrators and developers to better manage this local storage, it does not provide any promises related to I/O throughput and latency. 2.2. Types of ephemeral storage Ephemeral local storage is always made available in the primary partition. There are two basic ways of creating the primary partition: root and runtime. Root This partition holds the kubelet root directory, /var/lib/kubelet/ by default, and the /var/log/ directory. This partition can be shared between user pods, the OS, and Kubernetes system daemons. This partition can be consumed by pods through EmptyDir volumes, container logs, image layers, and container-writable layers. Kubelet manages shared access and isolation of this partition. This partition is ephemeral, and applications cannot expect any performance SLAs, such as disk IOPS, from this partition. Runtime This is an optional partition that runtimes can use for overlay file systems. OpenShift Container Platform attempts to identify and provide shared access along with isolation to this partition. Container image layers and writable layers are stored here. If the runtime partition exists, the root partition does not hold any image layer or other writable storage. 2.3. Ephemeral storage management Cluster administrators can manage ephemeral storage within a project by setting quotas that define the limit ranges and number of requests for ephemeral storage across all pods in a non-terminal state. Developers can also set requests and limits on this compute resource at the pod and container level. 2.4. Monitoring ephemeral storage You can use /bin/df as a tool to monitor ephemeral storage usage on the volume where ephemeral container data is located, which is /var/lib/kubelet and /var/lib/containers . The available space for only /var/lib/kubelet is shown when you use the df command if /var/lib/containers is placed on a separate disk by the cluster administrator. To show the human-readable values of used and available space in /var/lib , enter the following command: $ df -h /var/lib The output shows the ephemeral storage usage in /var/lib : Example output Filesystem Size Used Avail Use% Mounted on /dev/sda1 69G 32G 34G 49% /
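As a sketch of the per-container requests and limits mentioned in Section 2.3, the snippet below uses the standard ephemeral-storage resource name in a pod definition. The pod name, image, and sizes are placeholders rather than values from this guide:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ephemeral-demo
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/ubi8/ubi
        resources:
          requests:
            ephemeral-storage: "1Gi"   # the scheduler places the pod only on a node with at least this much allocatable ephemeral storage
          limits:
            ephemeral-storage: "2Gi"   # exceeding the limit can cause the pod to be evicted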
[ "df -h /var/lib", "Filesystem Size Used Avail Use% Mounted on /dev/sda1 69G 32G 34G 49% /" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/storage/understanding-ephemeral-storage
21.2. FTP
21.2. FTP The File Transfer Protocol ( FTP ) is one of the oldest and most commonly used protocols found on the Internet today. Its purpose is to reliably transfer files between computer hosts on a network without requiring the user to log directly in to the remote host or to have knowledge of how to use the remote system. It allows users to access files on remote systems using a standard set of simple commands. This section outlines the basics of the FTP protocol and introduces vsftpd , the primary FTP server shipped with Red Hat Enterprise Linux. 21.2.1. The File Transfer Protocol FTP uses a client-server architecture to transfer files using the TCP network protocol. Because FTP is a rather old protocol, it uses unencrypted user name and password authentication. For this reason, it is considered an insecure protocol and should not be used unless absolutely necessary. However, because FTP is so prevalent on the Internet, it is often required for sharing files to the public. System administrators, therefore, should be aware of FTP 's unique characteristics. This section describes how to configure vsftpd to establish connections secured by TLS and how to secure an FTP server with the help of SELinux . A good substitute for FTP is sftp from the OpenSSH suite of tools. For information about configuring OpenSSH and about the SSH protocol in general, see Chapter 14, OpenSSH . Unlike most protocols used on the Internet, FTP requires multiple network ports to work properly. When an FTP client application initiates a connection to an FTP server, it opens port 21 on the server - known as the command port . This port is used to issue all commands to the server. Any data requested from the server is returned to the client via a data port . The port number for data connections, and the way in which data connections are initialized, vary depending upon whether the client requests the data in active or passive mode. The following defines these modes: active mode Active mode is the original method used by the FTP protocol for transferring data to the client application. When an active-mode data transfer is initiated by the FTP client, the server opens a connection from port 20 on the server to the IP address and a random, unprivileged port (greater than 1024 ) specified by the client. This arrangement means that the client machine must be allowed to accept connections over any port above 1024 . With the growth of insecure networks, such as the Internet, the use of firewalls for protecting client machines is now prevalent. Because these client-side firewalls often deny incoming connections from active-mode FTP servers, passive mode was devised. passive mode Passive mode, like active mode, is initiated by the FTP client application. When requesting data from the server, the FTP client indicates it wants to access the data in passive mode and the server provides the IP address and a random, unprivileged port (greater than 1024 ) on the server. The client then connects to that port on the server to download the requested information. While passive mode does resolve issues for client-side firewall interference with data connections, it can complicate administration of the server-side firewall. You can reduce the number of open ports on a server by limiting the range of unprivileged ports on the FTP server. This also simplifies the process of configuring firewall rules for the server. See Section 21.2.2.6.8, "Network Options" for more information about limiting passive ports.
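As a sketch of the passive port limiting mentioned above, the following vsftpd directives restrict the range of data ports that passive mode may use. The range shown is an arbitrary example; choose a range appropriate for your environment and allow the same ports through the server firewall:

    # /etc/vsftpd/vsftpd.conf
    pasv_min_port=30000
    pasv_max_port=30100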
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-FTP
E.10. Additional Resources
E.10. Additional Resources This chapter is only intended as an introduction to GRUB. Consult the following resources to discover more about how GRUB works. E.10.1. Installed Documentation /usr/share/doc/grub- <version-number> / - This directory contains good information about using and configuring GRUB, where <version-number> corresponds to the version of the GRUB package installed. info grub - The GRUB info page contains a tutorial, a user reference manual, a programmer reference manual, and a FAQ document about GRUB and its usage.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-grub-additional-resources
Chapter 16. Replacing a failed disk
Chapter 16. Replacing a failed disk If a disk in your Ceph Storage cluster fails, you can replace it. 16.1. Replacing a disk See Adding OSDs in the Red Hat Ceph Storage Installation Guide for information on replacing a failed disk.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_replacing-a-failed-disk_deployingcontainerizedrhcs
8.250. unixODBC
8.250. unixODBC 8.250.1. RHBA-2014:0869 - unixODBC bug fix update Updated unixODBC packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The unixODBC packages contain a framework that supports accessing databases through the ODBC protocol. Bug Fixes BZ# 768986 Prior to this update, the desktop file for unixODBC, ODBCConfig.desktop, contained deprecated options and incorrect values. Consequently, the unixODBC application was not appropriately categorized. In this update, the options and values have been fixed and the application categorization works as intended. BZ# 1060225 Previously, file name values were hard-coded in the ODBC Driver Manager. As a consequence, the Driver Manager did not correctly interact with other applications after an update. The current update changes the hard-coded values to dynamically determined ones, and updating no longer causes Driver Manager incompatibilities with other applications. Users of unixODBC are advised to upgrade to these updated packages, which fix these bugs.
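To apply the update described above, a typical approach on Red Hat Enterprise Linux 6 would be the following yum commands, shown as a sketch; whether the fixed packages are available depends on the repositories the system is subscribed to.
yum update unixODBC        # pulls in the updated packages from the RHBA-2014:0869 advisory if available
rpm -q unixODBC            # verify the installed version afterwards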
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/unixodbc
Chapter 13. Configuring and setting up remote jobs
Chapter 13. Configuring and setting up remote jobs Red Hat Satellite supports remote execution of commands on hosts. Using remote execution, you can perform various tasks on multiple hosts simultaneously. 13.1. Remote execution in Red Hat Satellite With remote execution, you can run jobs on hosts from Capsules by using shell scripts or Ansible roles and playbooks. Use remote execution for the following benefits in Satellite: Run jobs on multiple hosts at once. Use variables in your commands for more granular control over the jobs you run. Use host facts and parameters to populate the variable values. Specify custom values for templates when you run the command. Communication for remote execution occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host, and can scale to manage many hosts. To use remote execution, you must define a job template. A job template is a command that you want to apply to remote hosts. You can execute a job template multiple times. Satellite uses ERB syntax for job templates. For more information, see Appendix B, Template writing reference . By default, Satellite includes several job templates for shell scripts and Ansible. For more information, see Setting up Job Templates in Managing hosts . Additional resources See Executing a Remote Job in Managing hosts . 13.2. Remote execution workflow For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on your Capsule Server. Before you can use Ansible roles, you must import the roles into Satellite from the Capsule where they are installed. When you run a remote job on hosts, for every host, Satellite performs the following actions to find a remote execution Capsule to use. Satellite searches only for Capsules that have the remote execution feature enabled. Satellite finds the host's interfaces that have the Remote execution checkbox selected. Satellite finds the subnets of these interfaces. Satellite finds remote execution Capsules assigned to these subnets. From this set of Capsules, Satellite selects the Capsule that has the least number of running jobs. By doing this, Satellite ensures that the job load is balanced between remote execution Capsules. If you have enabled Prefer registered through Capsule for remote execution , Satellite runs the REX job by using the Capsule to which the host is registered. By default, Prefer registered through Capsule for remote execution is set to No . To enable it, in the Satellite web UI, navigate to Administer > Settings , and on the Content tab, set Prefer registered through Capsule for remote execution to Yes . This ensures that Satellite performs REX jobs on hosts through the Capsule to which they are registered. If Satellite does not find a remote execution Capsule at this stage, and if the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. 
Satellite selects the most lightly loaded Capsule from the following types of Capsules that are assigned to the host: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule If Satellite does not find a remote execution Capsule at this stage, and if the Enable Global Capsule setting is enabled, Satellite selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. 13.3. Permissions for remote execution You can control which roles can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles: Remote Execution Manager : Can access all remote execution features and functionality. Remote Execution User : Can only run jobs. You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission on a customized role, you can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or Capsules are visible to the role. The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions. You can run remote execution jobs against Red Hat Satellite and Capsule registered as hosts to Red Hat Satellite with the execute_jobs_on_infrastructure_hosts permission. Standard Manager and Site Manager roles have this permission by default. If you use either the Manager or Site Manager role, or if you use a custom role with the execute_jobs_on_infrastructure_hosts permission, you can execute remote jobs against registered Red Hat Satellite and Capsule hosts. For more information on working with roles and permissions, see Creating and Managing Roles in Administering Red Hat Satellite . The following example shows filters for the execute_template_invocation permission: Use the first line in this example to apply the Reboot template to one selected host. Use the second line to define a pool of hosts with names ending with .staging.example.com . Use the third line to bind the template with a host group. Note Permissions assigned to users with these roles can change over time. If you have already scheduled some jobs to run in the future, and the permissions change, this can result in execution failure because permissions are checked immediately before job execution. 13.4. Transport modes for remote execution You can configure your Satellite to use two different modes of transport for remote job execution. You can configure a single Capsule to use either one mode or the other, but not both. Push-based transport On Capsules in ssh mode, remote execution uses the SSH service to transport job details. This is the default transport mode. The SSH service must be enabled and active on the target hosts. The remote execution Capsule must have access to the SSH port on the target hosts. Unless you have a different setting, the standard SSH port is 22. This transport mode supports both Script and Ansible providers. 
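As an illustration of the push-based prerequisites above, the following sketch shows how the SSH service and port might be opened on a target host. It assumes a systemd-based host with firewalld and the default SSH port 22; the exact services and port may differ in your environment.
systemctl enable --now sshd                     # ensure the SSH service is enabled and running
firewall-cmd --permanent --add-service=ssh      # allow inbound SSH from the remote execution Capsule
firewall-cmd --reload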
Pull-based transport On Capsules in pull-mqtt mode, remote execution uses Message Queueing Telemetry Transport (MQTT) to initiate the job execution it receives from Satellite Server. The host subscribes to the MQTT broker on Capsule for job notifications by using the yggdrasil pull client. After the host receives a notification from the MQTT broker, it pulls job details from Capsule over HTTPS, runs the job, and reports results back to Capsule. This transport mode supports the Script provider only. To use the pull-mqtt mode, you must enable it on Capsule Server and configure the pull client on hosts. Note If your Capsule already uses the pull-mqtt mode and you want to switch back to the ssh mode, run this satellite-installer command: Additional resources To enable pull mode on Capsule Server, see Configuring pull-based transport for remote execution in Installing Capsule Server . To enable pull mode on a registered host, continue with Section 13.5, "Configuring a host to use the pull client" . To enable pull mode on a new host, continue with the following: Section 2.1, "Creating a host in Red Hat Satellite" Section 4.3, "Registering hosts by using global registration" 13.5. Configuring a host to use the pull client For Capsules configured to use pull-mqtt mode, hosts can subscribe to remote jobs using the remote execution pull client. Hosts do not require an SSH connection from their Capsule Server. Prerequisites You have registered the host to Satellite. The Capsule through which the host is registered is configured to use pull-mqtt mode. For more information, see Configuring pull-based transport for remote execution in Installing Capsule Server . Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . The host can communicate with its Capsule over MQTT using port 1883 . The host can communicate with its Capsule over HTTPS. Procedure Install the katello-pull-transport-migrate package on your host: On Red Hat Enterprise Linux 9 and Red Hat Enterprise Linux 8 hosts: On Red Hat Enterprise Linux 7 hosts: The package installs foreman_ygg_worker and yggdrasil as dependencies, configures the yggdrasil client, and starts the pull client worker on the host. Verification Check the status of the yggdrasild service: 13.6. Creating a job template Use this procedure to create a job template. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Templates > Job templates . Click New Job Template . Click the Template tab, and in the Name field, enter a unique name for your job template. Select Default to make the template available for all organizations and locations. Create the template directly in the template editor or upload it from a text file by clicking Import . Optional: In the Audit Comment field, add information about the change. Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories in Managing hosts . Optional: In the Description Format field, enter a description template. For example, Install package %{package_name} . You can also use %{template_name} and %{job_category} in your template. 
From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks. Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete. Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab. Optional: Click Foreign input set to include other templates in this job. Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting. Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet . Optional: If you use the Ansible provider, click the Ansible tab. Select Enable Ansible Callback to allow hosts to send facts, which are used to create configuration reports, back to Satellite after a job finishes. Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. You can extend and customize job templates by including other templates in the template syntax. For more information, see Template Writing Reference and Job Template Examples and Extensions in Managing hosts . CLI procedure To create a job template using a template-definition file, enter the following command: 13.7. Importing an Ansible Playbook by name You can import Ansible Playbooks by name to Satellite from collections installed on Capsule. Satellite creates a job template from the imported playbook and places the template in the Ansible Playbook - Imported job category. If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/ My_Namespace / My_Collection . Prerequisites Ansible plugin is enabled. Your Satellite account has a role that grants the import_ansible_playbooks permission. Procedure Fetch the available Ansible Playbooks by using the following API request: Select the Ansible Playbook you want to import and note its name. Import the Ansible Playbook by its name: You get a notification in the Satellite web UI after the import completes. steps You can run the playbook by executing a remote job from the created job template. For more information, see Section 13.22, "Executing a remote job" . 13.8. Importing all available Ansible Playbooks You can import all the available Ansible Playbooks to Satellite from collections installed on Capsule. Satellite creates job templates from the imported playbooks and places the templates in the Ansible Playbook - Imported job category. If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/ My_Namespace / My_Collection . Prerequisites Ansible plugin is enabled. Your Satellite account has a role that grants the import_ansible_playbooks permission. Procedure Import the Ansible Playbooks by using the following API request: You get a notification in the Satellite web UI after the import completes. steps You can run the playbooks by executing a remote job from the created job templates. For more information, see Section 13.22, "Executing a remote job" . 13.9. Configuring the fallback to any Capsule remote execution setting in Satellite You can enable the Fallback to Any Capsule setting to configure Satellite to search for remote execution Capsules from the list of Capsules that are assigned to hosts. 
This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to Capsules that do not have the remote execution feature enabled. If the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded Capsule from the set of all Capsules assigned to the host, such as the following: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Fallback to Any Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Fallback to Any Capsule setting. To set the value to true , enter the following command: 13.10. Configuring the global Capsule remote execution setting in Satellite By default, Satellite searches for remote execution Capsules in hosts' organizations and locations regardless of whether Capsules are assigned to hosts' subnets or not. You can disable the Enable Global Capsule setting if you want to limit the search to the Capsules that are assigned to hosts' subnets. If the Enable Global Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Enable Global Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Enable Global Capsule setting. To set the value to true , enter the following command: 13.11. Setting an alternative directory for remote execution jobs in push mode By default, Satellite uses the /var/tmp directory on hosts for remote execution jobs in push mode. If the /var/tmp directory on your host is mounted with the noexec flag, Satellite cannot execute remote execution job scripts in this directory. You can use satellite-installer to set an alternative directory for executing remote execution jobs in push mode. Procedure On your host, create a new directory: Copy the SELinux context from the default /var/tmp directory: Configure your Satellite Server or Capsule Server to use the new directory: 13.12. Setting an alternative directory for remote execution jobs in pull mode By default, Satellite uses the /run directory on hosts for remote execution jobs in pull mode. If the /run directory on your host is mounted with the noexec flag, Satellite cannot execute remote execution job scripts in this directory. You can use the yggdrasild service to set an alternative directory for executing remote execution jobs in pull mode. Procedure On your host, perform these steps: Create a new directory: Access the yggdrasild service configuration: Specify the alternative directory by adding the following line to the configuration: Restart the yggdrasild service: 13.13. Altering the privilege elevation method By default, push-based remote execution uses sudo to switch from the SSH user to the effective user that executes the script on your host. In some situations, you might require to use another method, such as su or dzdo . 
You can globally configure an alternative method in your Satellite settings. Prerequisites Your user account has a role assigned that grants the view_settings and edit_settings permissions. If you want to use dzdo for Ansible jobs, ensure the community.general Ansible collection, which contains the required dzdo become plugin, is installed. For more information, see Installing collections in Ansible documentation . Procedure Navigate to Administer > Settings . Select the Remote Execution tab. Click the value of the Effective User Method setting. Select the new value. Click Submit . 13.14. Distributing SSH keys for remote execution For Capsules in ssh mode, remote execution connections are authenticated using SSH. The public SSH key from Capsule must be distributed to its attached hosts that you want to manage. Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22. Use one of the following methods to distribute the public SSH key from Capsule to target hosts: Section 13.15, "Distributing SSH keys for remote execution manually" . Section 13.17, "Using the Satellite API to obtain SSH keys for remote execution" . Section 13.18, "Configuring a Kickstart template to distribute SSH keys during provisioning" . For new Satellite hosts, you can deploy SSH keys to Satellite hosts during registration using the global registration template. For more information, see Registering a Host to Red Hat Satellite Using the Global Registration Template in Managing hosts . Satellite distributes SSH keys for the remote execution feature to the hosts provisioned from Satellite by default. If the hosts are running on Amazon Web Services, enable password authentication. For more information, see New User Accounts . 13.15. Distributing SSH keys for remote execution manually To distribute SSH keys manually, complete the following steps: Procedure Copy the SSH pub key from your Capsule to your target host: Repeat this step for each target host you want to manage. Verification To confirm that the key was successfully copied to the target host, enter the following command on Capsule: 13.16. Adding a passphrase to SSH key used for remote execution By default, Capsule uses a non-passphrase protected SSH key to execute remote jobs on hosts. You can protect the SSH key with a passphrase by following this procedure. Procedure On your Satellite Server or Capsule Server, use ssh-keygen to add a passphrase to your SSH key: steps Users now must use a passphrase when running remote execution jobs on hosts. 13.17. Using the Satellite API to obtain SSH keys for remote execution To use the Satellite API to download the public key from Capsule, complete this procedure on each target host. Procedure On the target host, create the ~/.ssh directory to store the SSH key: Download the SSH key from Capsule: Configure permissions for the ~/.ssh directory: Configure permissions for the authorized_keys file: 13.18. Configuring a Kickstart template to distribute SSH keys during provisioning You can add a remote_execution_ssh_keys snippet to your custom Kickstart template to deploy SSH keys to hosts during provisioning. Kickstart templates that Satellite ships include this snippet by default. Satellite copies the SSH key for remote execution to the systems during provisioning. Procedure To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use: 13.19. 
Configuring a keytab for Kerberos ticket granting tickets Use this procedure to configure Satellite to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets. Procedure Find the ID of the foreman-proxy user: Modify the umask value so that new files have the permissions 600 : Create the directory for the keytab: Create a keytab or copy an existing keytab to the directory: Change the directory owner to the foreman-proxy user: Ensure that the keytab file is read-only: Restore the SELinux context: 13.20. Configuring Kerberos authentication for remote execution You can use Kerberos authentication to establish an SSH connection for remote execution on Satellite hosts. Prerequisites Enroll Satellite Server on the Kerberos server Enroll the Satellite target host on the Kerberos server Configure and initialize a Kerberos user account for remote execution Ensure that the foreman-proxy user on Satellite has a valid Kerberos ticket granting ticket Procedure To install and enable Kerberos authentication for remote execution, enter the following command: To edit the default user for remote execution, in the Satellite web UI, navigate to Administer > Settings and click the Remote Execution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account. Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account. Verification To confirm that Kerberos authentication is ready to use, run a remote job on the host. For more information, see Executing a Remote Job in Managing hosts . 13.21. Setting up job templates Satellite provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Templates > Job templates . If you want to use a template without making changes, proceed to Executing a Remote Job in Managing hosts . You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone. Procedure To clone a template, in the Actions column, select Clone . Enter a unique name for the clone and click Submit to save the changes. Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in Managing hosts . Ansible considerations To create an Ansible job template, use the following procedure and instead of ERB syntax, use YAML syntax. Begin the template with --- . You can embed an Ansible Playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible Playbooks in Satellite. For more information, see Synchronizing Repository Templates in Managing hosts . Parameter variables At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host's edit page can be used as input parameters for job templates. 13.22. Executing a remote job You can execute a job that is based on a job template against one or more hosts. Note Ansible jobs run in batches on multiple hosts, so you cannot cancel a job running on a specific host. A job completes only after the Ansible Playbook runs on all hosts in the batch. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Monitor > Jobs and click Run job . 
Select the Job category and the Job template you want to use, then click . Select hosts on which you want to run the job. If you do not select any hosts, the job will run on all hosts you can see in the current context. Note If you want to select a host group and all of its subgroups, it is not sufficient to select the host group as the job would only run on hosts directly in that group and not on hosts in subgroups. Instead, you must either select the host group and all of its subgroups or use this search query: Replace My_Host_Group with the name of the top-level host group. If required, provide inputs for the job template. Different templates have different inputs and some templates do not have any inputs. After entering all the required inputs, click . Optional: To configure advanced settings for the job, fill in the Advanced fields . To learn more about advanced settings, see Section 13.23, "Advanced settings in the job wizard" . Click . Schedule time for the job. To execute the job immediately, keep the pre-selected Immediate execution . To execute the job in future time, select Future execution . To execute the job on regular basis, select Recurring execution . Optional: If you selected future or recurring execution, select the Query type , otherwise click . Static query means that job executes on the exact list of hosts that you provided. Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can be different from when you first used that filter. Click after you have selected the query type. Optional: If you selected future or recurring execution, provide additional details: For Future execution , enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it will be canceled. For Recurring execution , select the start date and time, frequency, and the condition for ending the recurring job. You can choose the recurrence to never end, end at a certain time, or end after a given number of repetitions. You can also add Purpose - a special label for tracking the job. There can only be one active job with a given purpose at a time. Click after you have entered the required information. Review job details. You have the option to return to any part of the job wizard and edit the information. Click Submit to schedule the job for execution. CLI procedure Enter the following command on Satellite: Find the ID of the job template you want to use: Show the template details to see parameters required by your template: Execute a remote job with custom parameters: Replace My_Search_Query with the filter expression that defines hosts, for example "name ~ My_Pattern " . Additional resources For more information about creating, monitoring, or canceling remote jobs with Hammer CLI, enter hammer job-template --help and hammer job-invocation --help . 13.23. Advanced settings in the job wizard Some job templates require you to enter advanced settings. Some of the advanced settings are only visible to certain job templates. Below is the list of general advanced settings. SSH user A user to be used for connecting to the host through SSH. Effective user A user to be used for executing the job. By default it is the SSH user. If it differs from the SSH user, su or sudo, depending on your settings, is used to switch the accounts. 
If you set an effective user in the advanced settings, Ansible sets ansible_become_user to your input value and ansible_become to true . This means that if you use the parameters become: true and become_user: My_User within a playbook, these will be overwritten by Satellite. If your SSH user and effective user are identical, Satellite does not overwrite the become_user . Therefore, you can set a custom become_user in your Ansible Playbook. Description A description template for the job. Timeout to kill Time in seconds from the start of the job after which the job should be killed if it is not finished already. Time to pickup Time in seconds after which the job is canceled if it is not picked up by a client. This setting only applies to hosts using pull-mqtt transport. Password Is used if SSH authentication method is a password instead of the SSH key. Private key passphrase Is used if SSH keys are protected by a passphrase. Effective user password Is used if effective user is different from the ssh user. Concurrency level Defines the maximum number of jobs executed at once. This can prevent overload of system resources in a case of executing the job on a large number of hosts. Execution ordering Determines the order in which the job is executed on hosts. It can be alphabetical or randomized. 13.24. Using extended cron lines When scheduling a cron job with remote execution, you can use an extended cron line to specify the cadence of the job. The standard cron line contains five fields that specify minute, hour, day of the month, month, and day of the week. For example, 0 5 * * * stands for every day at 5 AM. The extended cron line provides the following features: You can use # to specify a concrete week day in a month For example: 0 0 * * mon#1 specifies first Monday of the month 0 0 * * fri#3,fri#4 specifies 3rd and 4th Fridays of the month 0 7 * * fri#-1 specifies the last Friday of the month at 07:00 0 7 * * fri#L also specifies the last Friday of the month at 07:00 0 23 * * mon#2,tue specifies the 2nd Monday of the month and every Tuesday, at 23:00 You can use % to specify every n-th day of the month For example: 9 0 * * sun%2 specifies every other Sunday at 00:09 0 0 * * sun%2+1 specifies every odd Sunday 9 0 * * sun%2,tue%3 specifies every other Sunday and every third Tuesday You can use & to specify that the day of the month has to match the day of the week For example: 0 0 30 * 1& specifies 30th day of the month, but only if it is Monday 13.25. Scheduling a recurring Ansible job for a host You can schedule a recurring job to run Ansible roles on hosts. Prerequisites Ensure you have the view_foreman_tasks , view_job_invocations , and view_recurring_logics permissions. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target host on which you want to execute a remote job. On the Ansible tab, select Jobs . Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . Optional: View the scheduled Ansible job in host overview or by navigating to Ansible > Jobs . 13.26. Scheduling a recurring Ansible job for a host group You can schedule a recurring job to run Ansible roles on host groups. Procedure In the Satellite web UI, navigate to Configure > Host groups . In the Actions column, select Configure Ansible Job for the host group you want to schedule an Ansible roles run for. Click Schedule recurring job . 
Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . 13.27. Using Ansible provider for package and errata actions By default, Satellite is configured to use the Script provider templates for remote execution jobs. If you prefer using Ansible job templates for your remote jobs, you can configure Satellite to use them by default for remote execution features associated with them. Note Remember that Ansible job templates only work when remote execution is configured for ssh mode. Procedure In the Satellite web UI, navigate to Administer > Remote Execution Features . Find each feature whose name contains by_search . Change the job template for these features from Katello Script Default to Katello Ansible Default . Click Submit . Satellite now uses Ansible provider templates for remote execution jobs by which you can perform package and errata actions. This applies to job invocations from the Satellite web UI as well as by using hammer job-invocation create with the same remote execution features that you have changed. 13.28. Setting the job rate limit on Capsule You can limit the maximum number of active jobs on a Capsule at a time to prevent performance spikes. The job is active from the time Capsule first tries to notify the host about the job until the job is finished on the host. The job rate limit only applies to mqtt based jobs. Note The optimal maximum number of active jobs depends on the computing resources of your Capsule Server. By default, the maximum number of active jobs is unlimited. Procedure Set the maximum number of active jobs using satellite-installer : For example:
[ "name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=ssh", "dnf install katello-pull-transport-migrate", "yum install katello-pull-transport-migrate", "systemctl status yggdrasild", "hammer job-template create --file \" Path_to_My_Template_File \" --job-category \" My_Category_Name \" --name \" My_Template_Name \" --provider-type SSH", "curl --header 'Content-Type: application/json' --request GET https:// satellite.example.com /ansible/api/v2/ansible_playbooks/fetch?proxy_id= My_Capsule_ID", "curl --data '{ \"playbook_names\": [\" My_Playbook_Name \"] }' --header 'Content-Type: application/json' --request PUT https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_Capsule_ID", "curl -X PUT -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_Capsule_ID", "hammer settings set --name=remote_execution_fallback_proxy --value=true", "hammer settings set --name=remote_execution_global_proxy --value=true", "mkdir /My_Remote_Working_Directory", "chcon --reference=/var/tmp /My_Remote_Working_Directory", "satellite-installer --foreman-proxy-plugin-remote-execution-script-remote-working-dir /My_Remote_Working_Directory", "mkdir /My_Remote_Working_Directory", "systemctl edit yggdrasild", "Environment=FOREMAN_YGG_WORKER_WORKDIR= /My_Remote_Working_Directory", "systemctl restart yggdrasild", "ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]", "ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]", "ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy", "mkdir ~/.ssh", "curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys", "chmod 700 ~/.ssh", "chmod 600 ~/.ssh/authorized_keys", "<%= snippet 'remote_execution_ssh_keys' %>", "id -u foreman-proxy", "umask 077", "mkdir -p \"/var/kerberos/krb5/user/ My_User_ID \"", "cp My_Client.keytab /var/kerberos/krb5/user/ My_User_ID /client.keytab", "chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ My_User_ID \"", "chmod -wx \"/var/kerberos/krb5/user/ My_User_ID /client.keytab\"", "restorecon -RvF /var/kerberos/krb5", "satellite-installer --foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true", "hostgroup_fullname ~ \" My_Host_Group *\"", "hammer settings set --name=remote_execution_global_proxy --value=false", "hammer job-template list", "hammer job-template info --id My_Template_ID", "hammer job-invocation create --inputs My_Key_1 =\" My_Value_1 \", My_Key_2 =\" My_Value_2 \",... --job-template \" My_Template_Name \" --search-query \" My_Search_Query \"", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit MAX_JOBS_NUMBER", "satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/configuring_and_setting_up_remote_jobs_managing-hosts
1.2. High Availability Add-On Introduction
1.2. High Availability Add-On Introduction The High Availability Add-On is an integrated set of software components that can be deployed in a variety of configurations to suit your needs for performance, high availability, load balancing, scalability, file sharing, and economy. The High Availability Add-On consists of the following major components: Cluster infrastructure - Provides fundamental functions for nodes to work together as a cluster: configuration file management, membership management, lock management, and fencing. High availability Service Management - Provides failover of services from one cluster node to another in case a node becomes inoperative. Cluster administration tools - Configuration and management tools for setting up, configuring, and managing the High Availability Add-On. The tools are for use with the Cluster Infrastructure components, the high availability and Service Management components, and storage. You can supplement the High Availability Add-On with the following components: Red Hat GFS2 (Global File System 2) - Part of the Resilient Storage Add-On, this provides a cluster file system for use with the High Availability Add-On. GFS2 allows multiple nodes to share storage at a block level as if the storage were connected locally to each cluster node. GFS2 cluster file system requires a cluster infrastructure. Cluster Logical Volume Manager (CLVM) - Part of the Resilient Storage Add-On, this provides volume management of cluster storage. CLVM support also requires cluster infrastructure. HAProxy - Routing software that provides high availability load balancing and failover in layer 4 (TCP) and layer 7 (HTTP, HTTPS) services.
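As a brief illustration of the HAProxy component mentioned above, a load-balancing configuration can be as small as the following fragment. This is only a sketch: the bind address, port, and backend servers are hypothetical, and a complete haproxy.cfg also needs global and defaults sections.
# minimal /etc/haproxy/haproxy.cfg fragment (sketch)
frontend web_in
    mode http
    bind *:80
    default_backend web_servers

backend web_servers
    mode http
    balance roundrobin
    server node1 192.168.1.11:80 check
    server node2 192.168.1.12:80 check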
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/s1-rhcs-intro-haao
Chapter 18. Using the mount Command
Chapter 18. Using the mount Command On Linux, UNIX, and similar operating systems, file systems on different partitions and removable devices (CDs, DVDs, or USB flash drives for example) can be attached to a certain point (the mount point ) in the directory tree, and then detached again. To attach or detach a file system, use the mount or umount command respectively. This chapter describes the basic use of these commands, as well as some advanced topics, such as moving a mount point or creating shared subtrees. 18.1. Listing Currently Mounted File Systems To display all currently attached file systems, run the mount command with no additional arguments: This command displays the list of known mount points. Each line provides important information about the device name, the file system type, the directory in which it is mounted, and relevant mount options in the following form: device on directory type type ( options ) The findmnt utility, which allows users to list mounted file systems in a tree-like form, is also available from Red Hat Enterprise Linux 6.1. To display all currently attached file systems, run the findmnt command with no additional arguments: 18.1.1. Specifying the File System Type By default, the output of the mount command includes various virtual file systems such as sysfs and tmpfs . To display only the devices with a certain file system type, supply the -t option on the command line: Similarly, to display only the devices with a certain file system type by using the findmnt command, type: For a list of common file system types, refer to Table 18.1, "Common File System Types" . For an example usage, see Example 18.1, "Listing Currently Mounted ext4 File Systems" . Example 18.1. Listing Currently Mounted ext4 File Systems Usually, both / and /boot partitions are formatted to use ext4 . To display only the mount points that use this file system, type the following at a shell prompt: To list such mount points using the findmnt command, type:
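As a brief illustration of the attach and detach workflow described at the start of this chapter, the sequence below is a sketch; the device name and mount point are hypothetical and the file system type is only passed to the listing command if it applies.
mkdir /mnt/usb                 # create a mount point (hypothetical path)
mount /dev/sdc1 /mnt/usb       # attach the file system on /dev/sdc1 (hypothetical device)
mount -t ext4                  # if the device uses ext4, it now appears in this filtered listing
umount /mnt/usb                # detach the file system again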
[ "mount", "findmnt", "mount -t type", "findmnt -t type", "~]USD mount -t ext4 /dev/sda2 on / type ext4 (rw) /dev/sda1 on /boot type ext4 (rw)", "~]USD findmnt -t ext4 TARGET SOURCE FSTYPE OPTIONS / /dev/sda2 ext4 rw,realtime,seclabel,barrier=1,data=ordered /boot /dev/sda1 ext4 rw,realtime,seclabel,barrier=1,data=ordered" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch-mount-command
13.7. The Installation Summary Screen
13.7. The Installation Summary Screen The Installation Summary screen is the central location for setting up an installation. Figure 13.4. The Installation Summary Screen Instead of directing you through consecutive screens, the Red Hat Enterprise Linux installation program allows you to configure your installation in the order you choose. Use your mouse to select a menu item to configure a section of the installation. When you have completed configuring a section, or if you would like to complete that section later, click the Done button located in the upper left corner of the screen. Only sections marked with a warning symbol are mandatory. A note at the bottom of the screen warns you that these sections must be completed before the installation can begin. The remaining sections are optional. Beneath each section's title, the current configuration is summarized. Using this you can determine whether you need to visit the section to configure it further. Once all required sections are complete, click the Begin Installation button. Also see Section 13.18, "Begin Installation" . To cancel the installation, click the Quit button. Note When related background tasks are running, certain menu items might be temporarily unavailable. If you used a Kickstart option or a boot command-line option to specify an installation repository on a network, but no network is available at the start of the installation, the installation program will display the configuration screen for you to set up a network connection prior to displaying the Installation Summary screen. Figure 13.5. Network Configuration Screen When No Network Is Detected You can skip this step if you are installing from an installation DVD or other locally accessible media, and you are certain you will not need network to finish the installation. However, network connectivity is necessary for network installations (see Section 8.11, "Installation Source" ) or for setting up advanced storage devices (see Section 8.15, "Storage Devices" ). For more details about configuring a network in the installation program, see Section 8.12, "Network & Hostname" .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-graphical-installation-summary-ppc
Chapter 3. Enabling Linux control group version 1 (cgroup v1)
Chapter 3. Enabling Linux control group version 1 (cgroup v1) As of OpenShift Container Platform 4.14, OpenShift Container Platform uses Linux control group version 2 (cgroup v2) in your cluster. If you are using cgroup v1 on OpenShift Container Platform 4.13 or earlier, migrating to OpenShift Container Platform 4.17 will not automatically update your cgroup configuration to version 2. A fresh installation of OpenShift Container Platform 4.14 or later will use cgroup v2 by default. However, you can enable Linux control group version 1 (cgroup v1) upon installation. Enabling cgroup v1 in OpenShift Container Platform disables all cgroup v2 controllers and hierarchies in your cluster. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" of this section. 3.1. Enabling Linux cgroup v1 during installation You can enable Linux control group version 1 (cgroup v1) when you install a cluster by creating installation manifests. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Procedure Create or edit the node.config object to specify the v1 cgroup: apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: "v1" Proceed with the installation as usual. Additional resources OpenShift Container Platform installation overview Configuring the Linux cgroup on your nodes
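For an existing cluster, the same cgroupMode field can be changed after installation by editing the node.config object. The patch below is only a sketch of one way to do this; see "Configuring the Linux cgroup on your nodes" for the supported procedure, and note that changing the cgroup mode triggers node updates.
oc patch nodes.config.openshift.io cluster --type merge -p '{"spec":{"cgroupMode":"v1"}}'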
[ "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: \"v2\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installation_configuration/enabling-cgroup-v1
Installation overview
Installation overview OpenShift Container Platform 4.15 Overview content for installing OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc get nodes", "NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a", "oc get machines -A", "NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m", "capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage", "oc get clusterversion version -o jsonpath='{.spec.capabilities}{\"\\n\"}{.status.capabilities}{\"\\n\"}'", "{\"additionalEnabledCapabilities\":[\"openshift-samples\"],\"baselineCapabilitySet\":\"None\"} {\"enabledCapabilities\":[\"openshift-samples\"],\"knownCapabilities\":[\"CSISnapshot\",\"Console\",\"Insights\",\"Storage\",\"baremetal\",\"marketplace\",\"openshift-samples\"]}", "oc patch clusterversion version --type merge -p '{\"spec\":{\"capabilities\":{\"baselineCapabilitySet\":\"vCurrent\"}}}' 1", "oc get clusterversion version -o jsonpath='{.spec.capabilities.additionalEnabledCapabilities}{\"\\n\"}'", "[\"openshift-samples\"]", "oc patch clusterversion/version --type merge -p '{\"spec\":{\"capabilities\":{\"additionalEnabledCapabilities\":[\"openshift-samples\", \"marketplace\"]}}}'", "oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type==\"ImplicitlyEnabledCapabilities\")]}{\"\\n\"}'", "{\"lastTransitionTime\":\"2022-07-22T03:14:35Z\",\"message\":\"The following capabilities could not be disabled: openshift-samples\",\"reason\":\"CapabilitiesImplicitlyEnabled\",\"status\":\"True\",\"type\":\"ImplicitlyEnabledCapabilities\"}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installation_overview/index
Chapter 9. SecurityContextConstraints [security.openshift.io/v1]
Chapter 9. SecurityContextConstraints [security.openshift.io/v1] Description SecurityContextConstraints governs the ability to make requests that affect the SecurityContext that will be applied to a container. For historical reasons SCC was exposed under the core Kubernetes API group. That exposure is deprecated and will be removed in a future release - users should instead use the security.openshift.io group to manage SecurityContextConstraints. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required allowHostDirVolumePlugin allowHostIPC allowHostNetwork allowHostPID allowHostPorts allowPrivilegedContainer readOnlyRootFilesystem 9.1. Specification Property Type Description allowHostDirVolumePlugin boolean AllowHostDirVolumePlugin determines if the policy allow containers to use the HostDir volume plugin allowHostIPC boolean AllowHostIPC determines if the policy allows host ipc in the containers. allowHostNetwork boolean AllowHostNetwork determines if the policy allows the use of HostNetwork in the pod spec. allowHostPID boolean AllowHostPID determines if the policy allows host pid in the containers. allowHostPorts boolean AllowHostPorts determines if the policy allows host ports in the containers. allowPrivilegeEscalation `` AllowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, defaults to true. allowPrivilegedContainer boolean AllowPrivilegedContainer determines if a container can request to be run as privileged. allowedCapabilities `` AllowedCapabilities is a list of capabilities that can be requested to add to the container. Capabilities in this field maybe added at the pod author's discretion. You must not list a capability in both AllowedCapabilities and RequiredDropCapabilities. To allow all capabilities you may use '*'. allowedFlexVolumes `` AllowedFlexVolumes is a whitelist of allowed Flexvolumes. Empty or nil indicates that all Flexvolumes may be used. This parameter is effective only when the usage of the Flexvolumes is allowed in the "Volumes" field. allowedUnsafeSysctls `` AllowedUnsafeSysctls is a list of explicitly allowed unsafe sysctls, defaults to none. Each entry is either a plain sysctl name or ends in "*" in which case it is considered as a prefix of allowed sysctls. Single * means all unsafe sysctls are allowed. Kubelet has to whitelist all allowed unsafe sysctls explicitly to avoid rejection. Examples: e.g. "foo/\*" allows "foo/bar", "foo/baz", etc. e.g. "foo.*" allows "foo.bar", "foo.baz", etc. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources defaultAddCapabilities `` DefaultAddCapabilities is the default set of capabilities that will be added to the container unless the pod spec specifically drops the capability. You may not list a capabiility in both DefaultAddCapabilities and RequiredDropCapabilities. defaultAllowPrivilegeEscalation `` DefaultAllowPrivilegeEscalation controls the default setting for whether a process can gain more privileges than its parent process. forbiddenSysctls `` ForbiddenSysctls is a list of explicitly forbidden sysctls, defaults to none. 
Each entry is either a plain sysctl name or ends in "*" in which case it is considered as a prefix of forbidden sysctls. Single * means all sysctls are forbidden. Examples: e.g. "foo/\*" forbids "foo/bar", "foo/baz", etc. e.g. "foo.*" forbids "foo.bar", "foo.baz", etc. fsGroup `` FSGroup is the strategy that will dictate what fs group is used by the SecurityContext. groups `` The groups that have permission to use this security context constraints kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata priority `` Priority influences the sort order of SCCs when evaluating which SCCs to try first for a given pod request based on access in the Users and Groups fields. The higher the int, the higher priority. An unset value is considered a 0 priority. If scores for multiple SCCs are equal they will be sorted from most restrictive to least restrictive. If both priorities and restrictions are equal the SCCs will be sorted by name. readOnlyRootFilesystem boolean ReadOnlyRootFilesystem when set to true will force containers to run with a read only root file system. If the container specifically requests to run with a non-read only root file system the SCC should deny the pod. If set to false the container may run with a read only root file system if it wishes but it will not be forced to. requiredDropCapabilities `` RequiredDropCapabilities are the capabilities that will be dropped from the container. These are required to be dropped and cannot be added. runAsUser `` RunAsUser is the strategy that will dictate what RunAsUser is used in the SecurityContext. seLinuxContext `` SELinuxContext is the strategy that will dictate what labels will be set in the SecurityContext. seccompProfiles `` SeccompProfiles lists the allowed profiles that may be set for the pod or container's seccomp annotations. An unset (nil) or empty value means that no profiles may be specifid by the pod or container. The wildcard '*' may be used to allow all profiles. When used to generate a value for a pod the first non-wildcard profile will be used as the default. supplementalGroups `` SupplementalGroups is the strategy that will dictate what supplemental groups are used by the SecurityContext. users `` The users who have permissions to use this security context constraints volumes `` Volumes is a white list of allowed volume plugins. FSType corresponds directly with the field names of a VolumeSource (azureFile, configMap, emptyDir). To allow all volumes you may use "*". To allow no volumes, set to ["none"]. 9.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/securitycontextconstraints DELETE : delete collection of SecurityContextConstraints GET : list objects of kind SecurityContextConstraints POST : create SecurityContextConstraints /apis/security.openshift.io/v1/watch/securitycontextconstraints GET : watch individual changes to a list of SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/security.openshift.io/v1/securitycontextconstraints/{name} DELETE : delete SecurityContextConstraints GET : read the specified SecurityContextConstraints PATCH : partially update the specified SecurityContextConstraints PUT : replace the specified SecurityContextConstraints /apis/security.openshift.io/v1/watch/securitycontextconstraints/{name} GET : watch changes to an object of kind SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 9.2.1. /apis/security.openshift.io/v1/securitycontextconstraints HTTP method DELETE Description delete collection of SecurityContextConstraints Table 9.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind SecurityContextConstraints Table 9.2. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraintsList schema 401 - Unauthorized Empty HTTP method POST Description create SecurityContextConstraints Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body SecurityContextConstraints schema Table 9.5. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 201 - Created SecurityContextConstraints schema 202 - Accepted SecurityContextConstraints schema 401 - Unauthorized Empty 9.2.2. /apis/security.openshift.io/v1/watch/securitycontextconstraints HTTP method GET Description watch individual changes to a list of SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead. Table 9.6. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.3. /apis/security.openshift.io/v1/securitycontextconstraints/{name} Table 9.7. Global path parameters Parameter Type Description name string name of the SecurityContextConstraints HTTP method DELETE Description delete SecurityContextConstraints Table 9.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 9.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified SecurityContextConstraints Table 9.10. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified SecurityContextConstraints Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.12. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified SecurityContextConstraints Table 9.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.14. Body parameters Parameter Type Description body SecurityContextConstraints schema Table 9.15. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 201 - Created SecurityContextConstraints schema 401 - Unauthorized Empty 9.2.4. 
/apis/security.openshift.io/v1/watch/securitycontextconstraints/{name} Table 9.16. Global path parameters Parameter Type Description name string name of the SecurityContextConstraints HTTP method GET Description watch changes to an object of kind SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 9.17. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
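For orientation only, here is a minimal Python sketch that calls the list endpoint documented above (GET /apis/security.openshift.io/v1/securitycontextconstraints) using the requests library. It is not part of the API reference; the API server URL, bearer token, and CA bundle path are placeholders that must be replaced with values from your own cluster.

import requests

API_SERVER = "https://api.cluster.example.com:6443"  # placeholder API server URL
TOKEN = "sha256~REPLACE_WITH_A_VALID_TOKEN"          # placeholder bearer token

# List all SecurityContextConstraints through the documented endpoint.
response = requests.get(
    API_SERVER + "/apis/security.openshift.io/v1/securitycontextconstraints",
    headers={"Authorization": "Bearer " + TOKEN},
    verify="/path/to/ca.crt",  # placeholder CA bundle for the API server
)
response.raise_for_status()

for scc in response.json().get("items", []):
    print(scc["metadata"]["name"], scc.get("priority"))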
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_apis/securitycontextconstraints-security-openshift-io-v1
function::task_state
function::task_state Name function::task_state - The state of the task. Synopsis Arguments task task_struct pointer. General Syntax task_state:long(task:long) Description Return the state of the given task, one of: TASK_RUNNING (0), TASK_INTERRUPTIBLE (1), TASK_UNINTERRUPTIBLE (2), TASK_STOPPED (4), TASK_TRACED (8), EXIT_ZOMBIE (16), EXIT_DEAD (32).
[ "function task_state:long(task:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-state
User guide
User guide Red Hat OpenShift Dev Spaces 3.14 Using Red Hat OpenShift Dev Spaces 3.14 Jana Vrbkova [email protected] Red Hat Developer Group Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/user_guide/index
D.5. Troubleshooting Matching Rules
D.5. Troubleshooting Matching Rules International collation order matching rules may not behave consistently. Some forms of matching-rule invocation do not work correctly, producing incorrect search results. For example, the following rules do not work: However, the rules listed below will work (note the .3 before the passin value):
[ "ldapsearch -x -p 389 -D \"uid= userID ,ou=people,dc=example,dc=com\" -W -b \"dc=example,dc=com\" \"sn:2.16.840.1.113730.3.3.2.7.1:=passin\" ldapsearch -x -p 389 -D \"uid= userID ,ou=people,dc=example,dc=com\" -W -b \"dc=example,dc=com\" \"sn:de:=passin\"", "ldapsearch -x -p 389 -D \"uid= userID ,ou=people,dc=example,dc=com\" -W -b \"dc=example,dc=com\" \"sn:2.16.840.1.113730.3.3.2.7.1 .3 :=passin\" ldapsearch -x -p 389 -D \"uid= userID ,ou=people,dc=example,dc=com\" -W -b \"dc=example,dc=com\" \"sn:de .3 :=passin\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/troubleshooting_matching_rules
3.3. Installing a Client
3.3. Installing a Client The ipa-client-install utility installs and configures an IdM client. The installation process requires you to provide credentials that can be used to enroll the client. The following authentication methods are supported: Credentials of a user authorized to enroll clients, such as admin By default, ipa-client-install expects this option. See Section 3.3.1, "Installing a Client Interactively" for an example. To provide the user credentials directly to ipa-client-install , use the --principal and --password options. A random, one-time password pre-generated on the server To use this authentication method, add the --random option to ipa-client-install option. See Example 3.1, "Installing a Client Non-interactively Using a Random Password" . A principal from a enrollment To use this authentication method, add the --keytab option to ipa-client-install . See Section 3.8, "Re-enrolling a Client into the IdM Domain" for details. See the ipa-client-install (1) man page for details. The following sections document basic installation scenarios. For more details on using ipa-client-install and a complete list of the accepted options, see the ipa-client-install (1) man page. 3.3.1. Installing a Client Interactively The following procedure installs a client while prompting the user for input when required. The user provides credentials of a user authorized to enroll clients into the domain, such as the admin user. Run the ipa-client-install utility. Add the --enable-dns-updates option to update the DNS records with the client machine's IP address if one of the following applies: the IdM server the client will be enrolled with was installed with integrated DNS the DNS server on the network accepts DNS entry updates with the GSS-TSIG protocol Add the --no-krb5-offline-passwords option to disable storing Kerberos passwords in the SSSD cache. The installation script attempts to obtain all the required settings automatically. If your DNS zone and SRV records are set properly on your system, the script automatically discovers all the required values and prints them. Enter yes to confirm. If you want to install the system with different values, cancel the current installation. Then run ipa-client-install again, and specify the required values using command-line options. For details, see the DNS Autodiscovery section in the ipa-client-install (1) man page. If the script fails to obtain some settings automatically, it prompts you for the values. Important Do not use single-label domain names, for example .company: the IdM domain must be composed of one or more subdomains and a top level domain, for example example.com or company.example.com. The fully qualified domain name must meet the following conditions: It is a valid DNS name, which means only numbers, alphabetic characters, and hyphens (-) are allowed. Other characters, such as underscores (_), in the host name cause DNS failures. It is all lower-case. No capital letters are allowed. The fully qualified domain name must not resolve to the loopback address. It must resolve to the machine's public IP address, not to 127.0.0.1 . For other recommended naming practices, see the Recommended Naming Practices in the Red Hat Enterprise Linux Security Guide . The script prompts for a user whose identity will be used to enroll the client. By default, this is the admin user: The installation script now configures the client. Wait for the operation to complete. Run the ipa-client-automount utility, which automatically configures NFS for IdM. 
See Section 34.2.1, "Configuring NFS Automatically" for details. 3.3.2. Installing a Client Non-interactively For a non-interactive installation, provide all required information to the ipa-client-install utility using command-line options. The minimum required options for a non-interactive installation are: options for specifying the credentials that will be used to enroll the client; see Section 3.3, "Installing a Client" for details --unattended to let the installation run without requiring user confirmation If your DNS zone and SRV records are set properly on your system, the script automatically discovers all the other required values. If the script cannot discover the values automatically, provide them using command-line options. --hostname to specify a static host name for the client machine Important Do not use single-label domain names, for example .company: the IdM domain must be composed of one or more subdomains and a top level domain, for example example.com or company.example.com. The fully qualified domain name must meet the following conditions: It is a valid DNS name, which means only numbers, alphabetic characters, and hyphens (-) are allowed. Other characters, such as underscores (_), in the host name cause DNS failures. It is all lower-case. No capital letters are allowed. The fully qualified domain name must not resolve to the loopback address. It must resolve to the machine's public IP address, not to 127.0.0.1 . For other recommended naming practices, see the Recommended Naming Practices in the Red Hat Enterprise Linux Security Guide . --server to specify the host name of the IdM server the client will be enrolled with --domain to specify the DNS domain name of the IdM server the client will be enrolled with --realm to specify the Kerberos realm name Add the --enable-dns-updates option to update the DNS records with the client machine's IP address if one of the following applies: the IdM server the client will be enrolled with was installed with integrated DNS the DNS server on the network accepts DNS entry updates with the GSS-TSIG protocol Add the --no-krb5-offline-passwords option to disable storing Kerberos passwords in the SSSD cache. For a complete list of options accepted by ipa-client-install , see the ipa-client-install (1) man page. Example 3.1. Installing a Client Non-interactively Using a Random Password This procedure installs a client without prompting the user for any input. The process includes pre-generating a random one-time password on the server that is used to authorize the enrollment. On an existing server: Log in as the administrator: Add the new machine as an IdM host. Use the --random option with the ipa host-add command to generate the random password. The generated password will become invalid after you use it to enroll the machine into the IdM domain. It will be replaced with a proper host keytab after the enrollment is finished. On the machine where you want to install the client, run ipa-client-install , and use these options: --password for the random password from the ipa host-add output Note The password often contains special characters. Therefore, enclose it in single quotes ('). --unattended to let the installation run without requiring user confirmation If your DNS zone and SRV records are set properly on your system, the script automatically discovers all the other required values. If the script cannot discover the values automatically, provide them using command-line options. 
For example: Run the ipa-client-automount utility, which automatically configures NFS for IdM. See Section 34.2.1, "Configuring NFS Automatically" for details.
[ "Client hostname: client.example.com Realm: EXAMPLE.COM DNS Domain: example.com IPA Server: server.example.com BaseDN: dc=example,dc=com Continue to configure the system with these values? [no]: yes", "User authorized to enroll computers: admin Password for [email protected]", "Client configuration complete.", "kinit admin", "ipa host-add client.example.com --random -------------------------------------------------- Added host \"client.example.com\" -------------------------------------------------- Host name: client.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.example.com", "ipa-client-install --password 'W5YpARl=7M.n' --domain example.com --server server.example.com --unattended" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/client-install
Chapter 3. Python Examples
Chapter 3. Python Examples 3.1. Overview This section provides examples demonstrating the steps to create a virtual machine within a basic Red Hat Virtualization environment, using the Python SDK. These examples use the ovirtsdk Python library provided by the ovirt-engine-sdk-python package. This package is available to systems attached to a Red Hat Virtualization subscription pool in Red Hat Subscription Manager. See Section 1.2, "Installing the Python Software Development Kit" for more information on subscribing your system(s) to download the software. You will also need: A networked installation of Red Hat Virtualization Manager. A networked and configured Red Hat Virtualization Host. An ISO image file containing an operating system for installation on a virtual machine. A working understanding of both the logical and physical objects that make up a Red Hat Virtualization environment. A working understanding of the Python programming language. The examples include placeholders for authentication details ( admin@internal for user name, and password for password). Replace the placeholders with the authentication requirements of your environment. Red Hat Virtualization Manager generates a globally unique identifier (GUID) for the id attribute for each resource. Identifier codes in these examples differ from the identifier codes in your Red Hat Virtualization environment. The examples contain only basic exception and error handling logic. For more information on the exception handling specific to the SDK, see the pydoc for the ovirtsdk.infrastructure.errors module: 3.2. Connecting to the Red Hat Virtualization Manager 3.2.1. Connecting to the Red Hat Virtualization Manager in Version 3 To connect to the Red Hat Virtualization Manager, you must create an instance of the API class from the ovirtsdk.api module by importing the class at the start of the script: from ovirtsdk.api import API The constructor of the API class takes a number of arguments. Supported arguments are: url Specifies the URL of the Manager to connect to, including the /api path. This parameter is mandatory. username Specifies the user name to connect. This parameter is mandatory. password Specifies the password for the user name provided by the username parameter. This parameter is mandatory. kerberos Uses a valid Kerberos ticket to authenticate the connection. Valid values are True and False . This parameter is optional. key_file Specifies a PEM formatted key file containing the private key associated with the certificate specified by cert_file . This parameter is optional. cert_file Specifies a PEM formatted client certificate to be used for establishing the identity of the client on the server. This parameter is optional. ca_file Specifies the certificate file of the certificate authority for the server. This parameter is mandatory unless the insecure parameter is set to True . The certificate is expected to be a copy of the one for the Manager's Certificate Authority. For more information on obtaining the certificate, see the REST API Guide . port Specifies the port to connect using, where it has not been provided as component of the url parameter. This parameter is optional. timeout Specifies the amount of time in seconds that is allowed to pass before a request is to be considered as having timed out. This parameter is optional. persistent_auth Specifies whether persistent authentication is enabled for this connection. Valid values are True and False . This parameter is optional and defaults to False . 
insecure Allows a connection via SSL without certificate authority. Valid values are True and False . If the insecure parameter is set to False , which is the default, the ca_file must be supplied to secure the connection. This option should be used with caution, as it may allow man-in-the-middle attackers to spoof the identity of the server. filter Specifies whether or not user permission based filter is on or off. Valid values are True and False . If the filter parameter is set to False - which is the default - then the authentication credentials provided must be those of an administrative user. If the filter parameter is set to True then any user can be used and the Manager will filter the actions available to the user based on their permissions. debug Specifies whether debug mode is enabled for this connection. Valid values are True and False . This parameter is optional. Note User names and passwords are written to the debug log, so handle it with care. You can communicate with multiple Red Hat Virtualization Managers by creating and manipulating separate instances of the ovirtsdk.API Python class. This example script creates an instance of the API class, checks that the connection is working using the test() method, and disconnects using the disconnect() method. from ovirtsdk.api import API api = API ( url="https://engine.example.com", username="admin@internal", password="password", ca_file="ca.crt") api.test() print("Connected successfully!") api.disconnect() For a full list of supported methods, you can generate the documentation for the ovirtsdk.api module on the Manager machine: 3.2.2. Connecting to the Red Hat Virtualization Manager in Version 4 To connect to the Red Hat Virtualization Manager, you must create an instance of the Connection class from the ovirtsdk4.sdk module by importing the class at the start of the script: import ovirtsdk4 as sdk The constructor of the Connection class takes a number of arguments. Supported arguments are: url A string containing the base URL of the Manager, such as https://server.example.com/ovirt-engine/api . username Specifies the user name to connect, such as admin@internal . This parameter is mandatory. password Specifies the password for the user name provided by the username parameter. This parameter is mandatory. token An optional token to access the API, instead of a user name and password. If the token parameter is not specified, the SDK will create one automatically. insecure A Boolean flag that indicates whether the server's TLS certificate and host name should be checked. ca_file A PEM file containing the trusted CA certificates. The certificate presented by the server will be verified using these CA certificates. If ca_file parameter is not set, the system-wide CA certificate store is used. debug A Boolean flag indicating whether debug output should be generated. If the value is True and the log parameter is not None , the data sent to and received from the server will be written to the log. Note User names and passwords are written to the debug log, so handle it with care. Compression is disabled in debug mode, which means that debug messages are sent as plain text. log The logger where the log messages will be written. kerberos A Boolean flag indicating whether Kerberos authentication should be used instead of the default basic authentication. timeout The maximum total time to wait for the response, in seconds. A value of 0 (default) means to wait forever. 
If the timeout expires before the response is received, an exception is raised. compress A Boolean flag indicating whether the SDK should ask the server to send compressed responses. The default is True . This is a hint for the server, which may return uncompressed data even when this parameter is set to True . Compression is disabled in debug mode, which means that debug messages are sent as plain text. sso_url A string containing the base SSO URL of the server. The default SSO URL is computed from the url if no sso_url is provided. sso_revoke_url A string containing the base URL of the SSO revoke service. This needs to be specified only when using an external authentication service. By default, this URL is automatically calculated from the value of the url parameter, so that SSO token revoke will be performed using the SSO service, which is part of the Manager. sso_token_name The token name in the JSON SSO response returned from the SSO server. Default value is access_token . headers A dictionary with headers, which should be sent with every request. connections The maximum number of connections to open to the host. If the value is 0 (default), the number of connections is unlimited. pipeline The maximum number of requests to put in an HTTP pipeline without waiting for the response. If the value is 0 (default), pipelining is disabled. import ovirtsdk4 as sdk # Create a connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) connection.test() print("Connected successfully!") connection.close() For a full list of supported methods, you can generate the documentation for the ovirtsdk.api module on the Manager machine: 3.3. Listing Data Centers The datacenters collection contains all the data centers in the environment. Example 3.1. Listing data centers These examples list the data centers in the datacenters collection and output some basic information about each data center in the collection. V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') dc_list = api.datacenters.list() for dc in dc_list: print("%s (%s)" % (dc.get_name(), dc.get_id())) api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) dcs_service = connection.system_service().dcs_service() dcs = dcs_service.list() for dc in dcs: print("%s (%s)" % (dc.name, dc.id)) connection.close() In an environment where only the Default data center exists, and it is not activated, the examples output the text: 3.4. Listing Clusters The clusters collection contains all clusters in the environment. Example 3.2. Listing clusters These examples list the clusters in the clusters collection and output some basic information about each cluster in the collection. 
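Supplementing the connection examples above, the following hedged sketch adds minimal error handling around a version 4 connection. ovirtsdk4.Error is the base exception class of the SDK; the connection details are placeholders, as in the other examples. The V3 and V4 listings for Example 3.2 follow below.

import ovirtsdk4 as sdk

try:
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )
    # test() returns True when the connection and the credentials work.
    if connection.test():
        print("Connected successfully!")
    else:
        print("The connection test failed.")
    connection.close()
except sdk.Error as exc:
    print("SDK error: %s" % exc)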
V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') c_list = api.clusters.list() for c in c_list: print("%s (%s)" % (c.get_name(), c.get_id())) api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) cls_service = connection.system_service().clusters_service() cls = cls_service.list() for cl in cls: print("%s (%s)" % (cl.name, cl.id)) connection.close() In an environment where only the Default cluster exists, the examples output the text: 3.5. Listing Hosts The hosts collection contains all hosts in the environment. Example 3.3. Listing hosts These examples list the hosts in the hosts collection and their IDs. V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API(url="https://engine.example.com/ovirt-engine/api", username='admin@internal', password='password', ca_file='ca.pem') h_list = api.hosts.list() for h in h_list: print("%s (%s)" % (h.get_name(), h.get_id())) api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) host_service = connection.system_service().hosts_service() hosts = host_service.list() for host in hosts: print("%s (%s)" % (host.name, host.id)) connection.close() In an environment where only one host, MyHost , has been attached, the examples output the text: 3.6. Listing Logical Networks The networks collection contains all logical networks in the environment. Example 3.4. Listing logical networks These examples list the logical networks in the networks collection and outputs some basic information about each network in the collection. V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API(url="https://engine.example.com/ovirt-engine/api", username='admin@internal', password='password', ca_file='ca.pem') n_list = api.networks.list() for n in n_list: print("%s (%s)" % (n.get_name(), n.get_id())) api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) nws_service = connection.system_service().networks_service() nws = nws_service.list() for nw in nws: print("%s (%s)" % (nw.name, nw.id)) connection.close() In an environment where only the default management network exists, the examples output the text: 3.7. Listing Virtual Machines and Total Disk Size The vms collection contains a disks collection that describes the details of each disk attached to a virtual machine. Example 3.5. 
Listing virtual machines and total disk size These examples print a list of virtual machines and their total disk size in bytes: V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') virtual_machines = api.vms.list() if len(virtual_machines) > 0: print("%-30s %s" % ("Name","Disk Size")) print("==================================================") for virtual_machine in virtual_machines: disks = virtual_machine.disks.list() disk_size = 0 for disk in disks: disk_size += disk.get_size() print("%-30s: %d" % (virtual_machine.get_name(), disk_size)) api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) vms_service = connection.system_service().vms_service() virtual_machines = vms_service.list() if len(virtual_machines) > 0: print("%-30s %s" % ("Name", "Disk Size")) print("==================================================") for virtual_machine in virtual_machines: vm_service = vms_service.vm_service(virtual_machine.id) disk_attachments = vm_service.disk_attachments_service().list() disk_size = 0 for disk_attachment in disk_attachments: disk = connection.follow_link(disk_attachment.disk) disk_size += disk.provisioned_size print("%-30s: %d" % (virtual_machine.name, disk_size)) The examples output the virtual machine names and their disk sizes: 3.8. Creating NFS Data Storage When a Red Hat Virtualization environment is first created, it is necessary to define at least a data storage domain and an ISO storage domain. The data storage domain stores virtual disks while the ISO storage domain stores the installation media for guest operating systems. The storagedomains collection contains all the storage domains in the environment and can be used to add and remove storage domains. Note The code provided in this example assumes that the remote NFS share has been pre-configured for use with Red Hat Virtualization. See the Administration Guide for more information on preparing NFS shares. Example 3.6. Creating NFS data storage These examples add an NFS data domain to the storagedomains collection. For V3, adding an NFS storage domain can be broken down into several steps: Identify the data center to which the storage must be attached, using the get method of the datacenters collection. dc = api.datacenters.get(name="Default") Identify the host that must be used to attach the storage, using the get method of the hosts collection. h = api.hosts.get(name="myhost") Define the Storage parameters for the NFS storage domain. In this example the NFS location 192.0.43.10/storage/data is being used. s = params.Storage(address="_IP_address_", path="/storage/data", type_="nfs") Request creation of the storage domain, using the add method of the storagedomains collection. In addition to the Storage parameters it is necessary to pass: A name for the storage domain. The data center object that was retrieved from the datacenters collection. The host object that was retrieved from the hosts collection. The type of storage domain being added ( data , iso , or export ). The storage format to use ( v1 , v2 , or v3 ). 
Once these steps are combined, the completed script is: V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') dc = api.datacenters.get(name="Default") h = api.hosts.get(name="myhost") s = params.Storage(address="_IP_address_", path="/storage/data", type_="nfs") sd_params = params.StorageDomain(name="mydata", data_center=dc, host=h, type_="data", storage_format="v3", storage=s) try: sd = api.storagedomains.add(sd_params) print("Storage Domain '%s' added (%s)." % (sd.get_name(), sd.get_id())) api.disconnect() V4 For V4, the add method is used to add the new storage domain and the types class is used to pass the parameters. import ovirtsdk4 as sdk import ovirtsdk4.types as types # Create the connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the storage domains service: sds_service = connection.system_service().storage_domains_service() # Create a new NFS storage domain: sd = sds_service.add( types.StorageDomain( name='mydata', description='My data', type=types.StorageDomainType.DATA, host=types.Host( name='myhost', ), storage=types.HostStorage( type=types.StorageType.NFS, address='_FQDN_', path='/nfs/ovirt/path/to/mydata', ), ), ) # Wait until the storage domain is unattached: sd_service = sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = sd_service.get() if sd.status == types.StorageDomainStatus.UNATTACHED: break print("Storage Domain '%s' added (%s)." % (sd.name(), sd.id())) connection.close() If the add method call is successful, the examples output the text: 3.9. Creating NFS ISO Storage To create a virtual machine, you need installation media for the guest operating system. The installation media are stored in an ISO storage domain. Note The code provided in this example assumes that the remote NFS share has been pre-configured for use with Red Hat Virtualization. See the Administration Guide for more information on preparing NFS shares. Example 3.7. Creating NFS ISO storage These examples add an NFS ISO domain to the storagedomains collection. For V3, adding an NFS storage domain can be broken down into several steps: Identify the data center to which the storage must be attached, using the get method of the datacenters collection. dc = api.datacenters.get( name="Default" ) Identify the host that must be used to attach the storage, using the get method of the hosts collection. h = api.hosts.get(name="myhost") Define the Storage parameters for the NFS storage domain. In this example the NFS location FQDN/storage/iso is being used. s = params.Storage(address="_IP_address_", path="/storage/iso", type_="nfs") Request creation of the storage domain, using the add method of the storagedomains collection. In addition to the Storage parameters it is necessary to pass: A name for the storage domain. The data center object that was retrieved from the datacenters collection. The host object that was retrieved from the hosts collection. The type of storage domain being added ( data , iso , or export ). The storage format to use ( v1 , v2 , or v3 ). 
V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') dc = api.datacenters.get(name="Default") h = api.hosts.get(name="myhost") s = params.Storage(address="_IP_address_", path="/storage/iso", type_="nfs") sd_params = params.StorageDomain(name="myiso", data_center=dc, host=h, type_="iso", storage_format="v3", storage=s) try: sd = api.storagedomains.add(sd_params) print("Storage Domain '%s' added (%s)." % (sd.get_name(), sd.get_id())) api.disconnect() V4 For V4, the add method is used to add the new storage domain and the types class is used to pass the parameters. import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the storage domains service: sds_service = connection.system_service().storage_domains_service() # Use the "add" method to create a new NFS storage domain: sd = sds_service.add( types.StorageDomain( name='myiso', description='My ISO', type=types.StorageDomainType.ISO, host=types.Host( name='myhost', ), storage=types.HostStorage( type=types.StorageType.NFS, address='FQDN', path='/nfs/ovirt/path/to/myiso', ), ), ) # Wait until the storage domain is unattached: sd_service = sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = sd_service.get() if sd.status == types.StorageDomainStatus.UNATTACHED: break print("Storage Domain '%s' added (%s)." % (sd.name(), sd.id())) # Close the connection to the server: connection.close() If the add method call is successful, the examples output the text: 3.10. Attaching a Storage Domain to a Data Center Once you have added a storage domain to Red Hat Virtualization, you must attach it to a data center and activate it before it will be ready for use. Example 3.8. Attaching a storage domain to a data center These examples attach an existing NFS storage domain, mydata , to the an existing data center, Default . The attach action is facilitated by the add method of the data center's storagedomains collection. These examples may be used to attach both data and ISO storage domains. V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') dc = api.datacenters.get(name="Default") sd_data = api.storagedomains.get(name="mydata") try: dc_sd = dc.storagedomains.add(sd_data) print("Attached data storage domain '%s' to data center '%s' (Status: %s)." 
% (dc_sd.get_name(), dc.get_name(), dc_sd.get_status().get_state())) api.disconnect() V4 import time import ovirtsdk4 as sdk import ovirtsdk4.types as types # Create the connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Locate the service that manages the storage domains and use it to # search for the storage domain: sds_service = connection.system_service().storage_domains_service() sd = sds_service.list(search='name=mydata')[0] # Locate the service that manages the data centers and use it to # search for the data center: dcs_service = connection.system_service().data_centers_service() dc = dcs_service.list(search='name=Default')[0] # Locate the service that manages the data center where we want to # attach the storage domain: dc_service = dcs_service.data_center_service(dc.id) # Locate the service that manages the storage domains that are attached # to the data centers: attached_sds_service = dc_service.storage_domains_service() # Use the "add" method of the service that manages the attached storage # domains to attach it: attached_sds_service.add( types.StorageDomain( id=sd.id, ), ) # Wait until the storage domain is active: attached_sd_service = attached_sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = attached_sd_service.get() if sd.status == types.StorageDomainStatus.ACTIVE: break print("Attached data storage domain '%s' to data center '%s' (Status: %s)." % (sd.name, dc.name, sd.status)) # Close the connection to the server: connection.close() If the calls to the add methods are successful, the examples output the following text: Status: maintenance indicates that the storage domains still need to be activated. 3.11. Activating a Storage Domain Once you have added a storage domain to Red Hat Virtualization and attached it to a data center, you must activate it before it will be ready for use. Example 3.9. Activating a storage domain These examples activate an NFS storage domain, mydata , attached to the data center, Default . The activate action is facilitated by the activate method of the storage domain. V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') dc = api.datacenters.get(name="Default") sd_data = dc.storagedomains.get(name="mydata") try: sd_data.activate() print("Activated storage domain '%s' in data center '%s' (Status: %s)."
% (sd_data.get_name(), dc.get_name(), sd_data.get_status().get_state())) api.disconnect() V4 import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Locate the service that manages the storage domains and use it to # search for the storage domain: sds_service = connection.system_service().storage_domains_service() sd = sds_service.list(search='name=mydata')[0] # Locate the service that manages the data centers and use it to # search for the data center: dcs_service = connection.system_service().data_centers_service() dc = dcs_service.list(search='name=Default')[0] # Locate the service that manages the data center where we want to # attach the storage domain: dc_service = dcs_service.data_center_service(dc.id) # Locate the service that manages the storage domains that are attached # to the data centers: attached_sds_service = dc_service.storage_domains_service() # Activate storage domain: attached_sd_service = attached_sds_service.storage_domain_service(sd.id) attached_sd_service.activate() # Wait until the storage domain is active: while True: time.sleep(5) sd = attached_sd_service.get() if sd.status == types.StorageDomainStatus.ACTIVE: break print("Activated storage domain '%s' in data center '%s' (Status: %s)." % (sd.name, dc.name, sd.status)) # Close the connection to the server: connection.close() If the activate requests are successful, the examples output the text: Status: active indicates that the storage domains have been activated. 3.12. Listing Files in an ISO Storage Domain The storagedomains collection contains a files collection that describes the files in a storage domain. Example 3.10. Listing Files in an ISO Storage Domain These examples print a list of the ISO files in each ISO storage domain: V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') storage_domains = api.storagedomains.list() for storage_domain in storage_domains: if(storage_domain.get_type() == "iso"): print(storage_domain.get_name() + ":\n") files = storage_domain.files.list() for file in files: print("%s" % file.get_name()) print() api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) storage_domains_service = connection.system_service().storage_domains_service() storage_domains = storage_domains_service.list() for storage_domain in storage_domains: if(storage_domain.type == types.StorageDomainType.ISO): print(storage_domain.name + ":\n") sd_service = storage_domains_service.storage_domain_service(storage_domain.id) files = sd_service.files_service().list() for file in files: print("%s" % file.name + "\n") connection.close() The examples output the text: 3.13. Creating a Virtual Machine Virtual machine creation is performed in several steps. The first step, covered here, is to create the virtual machine object itself. Example 3.11. Creating a virtual machine These examples create a virtual machine, vm1 , with the following requirements: 512 MB of memory, expressed in bytes. Attached to the Default cluster, and therefore the Default data center. Based on the default Blank template. Boots from the virtual hard disk drive.
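Before the V3 and V4 creation listings that follow, this short hedged sketch (not part of the original example) checks whether a virtual machine named vm1 already exists, using the same V4 search pattern as the later examples; the connection details are placeholders.

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Search for an existing virtual machine with the chosen name.
vms_service = connection.system_service().vms_service()
if vms_service.list(search='name=vm1'):
    print("A virtual machine named 'vm1' already exists.")
else:
    print("The name 'vm1' is free.")

connection.close()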
V3 In V3, the virtual machine options are combined into a virtual machine parameter object, before using the add method of the vms collection to create the virtual machine itself. from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') vm_name = "vm1" vm_memory = 512*1024*1024 vm_cluster = api.clusters.get(name="Default") vm_template = api.templates.get(name="Blank") vm_os = params.OperatingSystem(boot=[params.Boot(dev="hd")]) vm_params = params.VM(name=vm_name, memory=vm_memory, cluster=vm_cluster, template=vm_template, os=vm_os) try: api.vms.add(vm=vm_params) print("Virtual machine '%s' added." % vm_name) api.disconnect() V4 In V4, the options are added as types , using the add method. import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Use the "add" method to create a new virtual machine: vm = vms_service.add( types.Vm( name='vm1', memory=512*1024*1024, cluster=types.Cluster( name='Default', ), template=types.Template( name='Blank', ), os=types.OperatingSystem(boot=types.Boot(devices=[types.BootDevice.HD])), ), ) print("Virtual machine '%s' added." % vm.name) # Close the connection to the server: connection.close() If the add request is successful, the examples output the text: 3.14. Creating a Virtual NIC To ensure that a newly created virtual machine has network access, you must create and attach a virtual NIC. Example 3.12. Creating a virtual NIC These examples create a NIC, nic1 , and attach it to a virtual machine, vm1 . The NIC in this example is a virtio network device and attached to the ovirtmgmt management network. V3 In V3, these options are combined into an NIC parameter object, before using the add method of the virtual machine's nics collection to create the NIC. from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') vm = api.vms.get(name="vm1") nic_name = "nic1" nic_interface = "virtio" nic_network = api.networks.get(name="ovirtmgmt") nic_params = params.NIC(name=nic_name, interface=nic_interface, network=nic_network) try: nic = vm.nics.add(nic_params) print("Network interface '%s' added to '%s'." % (nic.get_name(), vm.get_name())) api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Locate the virtual machines service and use it to find the virtual # machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the network interface cards of the # virtual machine: nics_service = vms_service.vm_service(vm.id).nics_service() # Locate the vNIC profiles service and use it to find the id of the # profile for the ovirtmgmt network: profiles_service = connection.system_service().vnic_profiles_service() profile_id = None for profile in profiles_service.list(): if profile.name == 'ovirtmgmt': profile_id = profile.id break # Use the "add" method of the network interface cards service to add the # new network interface card: nic = nics_service.add( types.Nic( name='nic1', interface=types.NicInterface.VIRTIO, vnic_profile=types.VnicProfile( id=profile_id, ), ), ) print("Network interface '%s' added to '%s'." % (nic.name, vm.name)) connection.close() If the add request is successful, the examples output the text: 3.15.
Creating a Virtual Machine Disk To ensure that a newly created virtual machine has access to persistent storage, you must create and attach a disk. Example 3.13. Creating a virtual machine disk These examples create an 8 GB virtio disk and attach it to a virtual machine, vm1 . The disk has the following requirements: Stored on the storage domain named data1 . 8 GB in size. system type disk (as opposed to data ). virtio storage device. COW format. Marked as a usable boot device. V3 In V3, the options are combined into a disk parameter object, before using the add method of the virtual machine's disks collection to create the disk itself. from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') vm = api.vms.get(name='vm1') sd = params.StorageDomains(storage_domain=[api.storagedomains.get(name='data1')]) disk_size = 8*1024*1024 disk_type = 'system' disk_interface = 'virtio' disk_format = 'cow' disk_bootable = True disk_params = params.Disk(storage_domains=sd, size=disk_size, type_=disk_type, interface=disk_interface, format=disk_format, bootable=disk_bootable) try: d = vm.disks.add(disk_params) print("Disk '%s' added to '%s'." % (d.get_name(), vm.get_name())) api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Locate the virtual machines service and use it to find the virtual # machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the disk attachments of the virtual # machine: disk_attachments_service = vms_service.vm_service(vm.id).disk_attachments_service() # Use the "add" method of the disk attachments service to add the disk. # Note that the size of the disk, the `provisioned_size` attribute, is # specified in bytes, so to create a disk of 10 GiB the value should # be 10 * 2^30. disk_attachment = disk_attachments_service.add( types.DiskAttachment( disk=types.Disk( format=types.DiskFormat.COW, provisioned_size=8*1024*1024, storage_domains=[ types.StorageDomain( name='data1', ), ], ), interface=types.DiskInterface.VIRTIO, bootable=True, active=True, ), ) # Wait until the disk status is OK: disks_service = connection.system_service().disks_service() disk_service = disks_service.disk_service(disk_attachment.disk.id) while True: time.sleep(5) disk = disk_service.get() if disk.status == types.DiskStatus.OK: break print("Disk '%s' added to '%s'." % (disk.name(), vm.name())) # Close the connection to the server: connection.close() If the add request is successful, the examples output the text: 3.16. Attaching an ISO Image to a Virtual Machine To install a guest operating system on a newly created virtual machine, you must attach an ISO file containing the operating system installation media. To locate the ISO file, see Section 3.12, "Listing Files in an ISO Storage Domain" . Example 3.14. Attaching an ISO image to a virtual machine These examples attach my_iso_file.iso to the vm1 virtual machine, using the add method of the virtual machine's cdroms collection. 
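A note on the disk sizes used in section 3.15 above: the provisioned size is specified in bytes, so gigabyte values are normally expressed as multiples of 2^30. The tiny sketch below shows the conversion (the helper name is an illustration only); the ISO attachment listings for Example 3.14 continue below.

# Convert a size in GiB to the byte value expected by provisioned_size.
def gib_to_bytes(gib):
    return gib * 2**30

print(gib_to_bytes(8))   # 8589934592 bytes for an 8 GiB disk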
V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API(url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem') sd = api.storagedomains.get(name='iso1') cd_iso = sd.files.get(name='my_iso_file.iso') cd_vm = api.vms.get(name='vm1') cd_params = params.CdRom(file=cd_iso) try: cd_vm.cdroms.add(cd_params) print("Attached CD to '%s'." % cd_vm.get_name()) api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Locate the service that manages the CDROM devices of the virtual machine: cdroms_service = vm_service.cdroms_service() # Get the first CDROM: cdrom = cdroms_service.list()[0] # Locate the service that manages the CDROM device found in step: cdrom_service = cdroms_service.cdrom_service(cdrom.id) # Change the CD of the VM to 'my_iso_file.iso'. By default the # operation permanently changes the disk that is visible to the # virtual machine after the boot, but has no effect # on the currently running virtual machine. If you want to change the # disk that is visible to the current running virtual machine, change # the `current` parameter's value to `True`. cdrom_service.update( cdrom=types.Cdrom( file=types.File( id='my_iso_file.iso' ), ), current=False, ) print("Attached CD to '%s'." % vm.name()) # Close the connection to the server: connection.close() If the add request is successful, the examples output the text: Example 3.15. Ejecting a cdrom from a virtual machine These examples eject an ISO image from a virtual machine's cdrom collection. V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API(url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem') sd = api.storagedomains.get(name='iso1') vm = api.vms.get(name='vm1') try: vm.cdroms.get(id='00000000-0000-0000-0000-000000000000').delete() print("Removed CD from '%s'." % vm.get_name()) api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Locate the service that manages the CDROM devices of the VM: cdroms_service = vm_service.cdroms_service() # Get the first found CDROM: cdrom = cdroms_service.list()[0] # Locate the service that manages the CDROM device found in step # of the VM: cdrom_service = cdroms_service.cdrom_service(cdrom.id) cdrom_service.remove() print("Removed CD from '%s'." % vm.name()) connection.close() If the delete or remove request is successful, the examples output the text: 3.17. Detaching a Disk You can detach a disk from a virtual machine. Example 3.16. 
Detaching a disk V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API(url="https://engine.example.com/ovirt-engine/api", username='admin@internal', password='password', ca_file='ca.pem') vm = api.vms.get(name="VM_NAME") disk = vm.disks.get(name="DISK_NAME") detach = params.Action(detach=True) disk.delete(action=detach) print("Detached disk %s successfully!" % disk) api.disconnect() V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Find the disk to detach by its name: disks_service = connection.system_service().disks_service() disk = disks_service.list(search='name=DISK_NAME')[0] # Locate the attachment of that disk on the virtual machine: attachments_service = vm_service.disk_attachments_service() attachment = next( (a for a in attachments_service.list() if a.disk.id == disk.id), None ) # Remove the attachment. The default behavior is that the disk is detached # from the virtual machine, but not deleted from the system. If you wish to # delete the disk, change the detach_only parameter to "False". attachments_service.attachment_service(attachment.id).remove(detach_only=True) print("Detached disk %s successfully!" % disk.name) # Close the connection to the server: connection.close() If the delete or remove request is successful, the examples output the text: === Starting a Virtual Machine You can start a virtual machine. These examples start the virtual machine using the start method. V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') vm = api.vms.get(name="vm1") try: vm.start() print("Started '%s'." % vm.get_name()) api.disconnect() V4 import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the virtual machine, as that is where # the action methods are defined: vm_service = vms_service.vm_service(vm.id) # Call the "start" method of the service to start it: vm_service.start() # Wait until the virtual machine is up: while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.UP: break print("Started '%s'." % vm.name) # Close the connection to the server: connection.close() If the start request is successful, the examples output the text: The UP status indicates that the virtual machine is running. === Starting a Virtual Machine with Overridden Parameters You can start a virtual machine, overriding its default parameters. These examples boot a virtual machine with a Windows ISO and attach the virtio-win_x86.vfd floppy disk, which contains Windows drivers. This action is equivalent to using the Run Once window in the Administration Portal to start a virtual machine.
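The V4 start example above waits for the UP status with an inline loop. Before the Run Once style V3 and V4 listings below, here is a hedged sketch of a small reusable polling helper; the helper name, timeout, and interval are illustrative assumptions, not part of the SDK.

import time

import ovirtsdk4.types as types

# Poll a vm_service until the virtual machine reaches the wanted status,
# then return the Vm object; give up after `timeout` seconds.
def wait_for_status(vm_service, wanted=types.VmStatus.UP, timeout=300, interval=5):
    deadline = time.time() + timeout
    while time.time() < deadline:
        vm = vm_service.get()
        if vm.status == wanted:
            return vm
        time.sleep(interval)
    raise RuntimeError("Timed out waiting for status %s" % wanted)

With a connection like the ones above, the helper could be called as wait_for_status(vm_service) immediately after vm_service.start().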
V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') try: vm = api.vms.get(name="vm1") cdrom = params.CdRom(file=params.File(id='windows_example.iso')) floppy = params.Floppy(file=params.File(id='virtio-win_x86.vfd')) try: vm.start( action=params.Action( vm=params.VM( os=params.OperatingSystem( boot=[params.Boot(dev='cdrom')]), cdroms=params.CdRoms(cdrom=[cdrom]), floppies=params.Floppies(floppy=[floppy]) ) ) ) V4 import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Get the reference to the "vms" service: vms_service = connection.system_service().vms_service() # Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] # Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Locate the service that manages the CDROM devices of the virtual machine: cdroms_service = vm_service.cdroms_service() # Get the first CDROM: cdrom = cdroms_service.list()[0] # Locate the service that manages the CDROM device found in step: cdrom_service = cdroms_service.cdrom_service(cdrom.id) # Change the CD of the VM to 'windows_example.iso': cdrom_service.update( cdrom=types.Cdrom( file=types.File( id='windows_example.iso' ), ), current=False, ) # Call the "start" method of the service to start it: vm_service.start( vm=types.Vm( os=types.OperatingSystem( boot=types.Boot( devices=[ types.BootDevice.CDROM, ] ) ), ) ) # Wait until the virtual machine's status is "UP": while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.UP: break print("Started '%s'." % vm.name()) # Close the connection to the server: connection.close() The CD image and floppy disk file must be available to the virtual machine. See Uploading Images to a Data Storage Domain for details. === Starting a Virtual Machine with Cloud-Init You can start a virtual machine with a specific configuration, using the Cloud-Init tool. These examples show you how to start a virtual machine using the Cloud-Init tool to set a host name and a static IP for the eth0 interface. 
V3 from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') try: vm = api.vms.get(name="vm1") try: vm.start( use_cloud_init=True, action=params.Action( vm=params.VM( initialization=params.Initialization( cloud_init=params.CloudInit( host=params.Host(address="MyHost.example.com"), network_configuration=params.NetworkConfiguration( nics=params.Nics( nic=[params.NIC( name="eth0", boot_protocol="static", on_boot=True, network=params.Network( ip=params.IP( address="10.10.10.1", netmask="255.255.255.0", gateway="10.10.10.1" ) ) ) ] ) ) ) ) ) ) ) V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Find the virtual machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search = 'name=vm1')[0] # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Start the virtual machine enabling cloud-init and providing the # password for the `root` user and the network configuration: vm_service.start( use_cloud_init=True, vm=types.Vm( initialization=types.Initialization( user_name='root', root_password='password', host_name='MyHost.example.com', nic_configurations=[ types.NicConfiguration( name='eth0', on_boot=True, boot_protocol=types.BootProtocol.STATIC, ip=types.Ip( version=types.IpVersion.V4, address='10.10.10.1', netmask='255.255.255.0', gateway='10.10.10.1' ) ) ) ) ) # Close the connection to the server: connection.close() === Checking System Events Red Hat Virtualization Manager records and logs many system events. These event logs are accessible through the user interface, the system log files, and using the API. The ovirtsdk library exposes events using the events collection. In this example the events collection is listed. The query parameter of the list method is used to ensure that all available pages of results are returned. By default the list method returns only the first page of results, which is 100 records in length. The returned list is sorted in reverse chronological order, to display the events in the order in which they occurred. 
V3 from ovirtsdk.api import API from ovirtsdk.xml import params api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') event_list = [] event_page_index = 1 event_page_current = api.events.list(query="page %s" % event_page_index) while(len(event_page_current) != 0): event_list = event_list + event_page_current event_page_index = event_page_index + 1 try: event_page_current = api.events.list(query="page %s" % event_page_index) event_list.reverse() for event in event_list: print("%s %s CODE %s - %s" % (event.get_time(), event.get_severity().upper(), event.get_code(), event.get_description())) V4 import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) # Find the service that manages the collection of events: events_service = connection.system_service().events_service() page_number = 1 events = events_service.list(search='page %s' % page_number) while events: for event in events: print( "%s %s CODE %s - %s" % ( event.time, event.severity, event.code, event.description, ) ) page_number = page_number + 1 events = events_service.list(search='page %s' % page_number) # Close the connection to the server: connection.close() These examples output events in the following format:
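YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 30 - User admin@internal logged in.
YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 153 - VM vm1 was started by admin@internal (Host: MyHost).
YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 30 - User admin@internal logged in.

Building on the V4 pagination pattern, the following is a minimal sketch that collects only error-level events across all pages before printing them. It is not part of the original example set: it reuses the same placeholder connection details as the examples above and assumes that the types.LogSeverity enumeration is available in your installed ovirtsdk4 version.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connection details are placeholders, matching the earlier examples.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Find the service that manages the collection of events:
events_service = connection.system_service().events_service()

# Walk every page, keeping only events reported with ERROR severity:
error_events = []
page_number = 1
events = events_service.list(search='page %s' % page_number)
while events:
    error_events.extend(
        event for event in events
        if event.severity == types.LogSeverity.ERROR
    )
    page_number = page_number + 1
    events = events_service.list(search='page %s' % page_number)

for event in error_events:
    print("%s CODE %s - %s" % (event.time, event.code, event.description))

# Close the connection to the server:
connection.close()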
[ "pydoc ovirtsdk.infrastructure.errors", "from ovirtsdk.api import API", "from ovirtsdk.api import API api = API ( url=\"https://engine.example.com\", username=\"admin@internal\", password=\"password\", ca_file=\"ca.crt\") api.test() print(\"Connected successfully!\") api.disconnect()", "pydoc ovirtsdk.api", "import ovirtsdk4 as sdk", "import ovirtsdk4 as sdk Create a connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) connection.test() print(\"Connected successfully!\") connection.close()", "pydoc ovirtsdk.api", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') dc_list = api.datacenters.list() for dc in dc_list: print(\"%s (%s)\" % (dc.get_name(), dc.get_id())) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) dcs_service = connection.system_service().dcs_service() dcs = dcs_service.list() for dc in dcs: print(\"%s (%s)\" % (dc.name, dc.id)) connection.close()", "Default (00000000-0000-0000-0000-000000000000)", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') c_list = api.clusters.list() for c in c_list: print(\"%s (%s)\" % (c.get_name(), c.get_id())) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) cls_service = connection.system_service().clusters_service() cls = cls_service.list() for cl in cls: print(\"%s (%s)\" % (cl.name, cl.id)) connection.close()", "Default (00000000-0000-0000-0000-000000000000)", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API(url=\"https://engine.example.com/ovirt-engine/api\", username='admin@internal', password='password', ca_file='ca.pem') h_list = api.hosts.list() for h in h_list: print(\"%s (%s)\" % (h.get_name(), h.get_id())) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) host_service = connection.system_service().hosts_service() hosts = host_service.list() for host in hosts: print(\"%s (%s)\" % (host.name, host.id)) connection.close()", "MyHost (00000000-0000-0000-0000-000000000000)", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API(url=\"https://engine.example.com/ovirt-engine/api\", username='admin@internal', password='password', ca_file='ca.pem') n_list = api.networks.list() for n in n_list: print(\"%s (%s)\" % (n.get_name(), n.get_id())) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) nws_service = connection.system_service().networks_service() nws = nws_service.list() for nw in nws: print(\"%s (%s)\" % (nw.name, nw.id)) connection.close()", "ovirtmgmt (00000000-0000-0000-0000-000000000000)", "from ovirtsdk.api import API from 
ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') virtual_machines = api.vms.list() if len(virtual_machines) > 0: print(\"%-30s %s\" % (\"Name\",\"Disk Size\")) print(\"==================================================\") for virtual_machine in virtual_machines: disks = virtual_machine.disks.list() disk_size = 0 for disk in disks: disk_size += disk.get_size() print(\"%-30s: %d\" % (virtual_machine.get_name(), disk_size)) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) vms_service = connection.system_service().vms_service() virtual_machines = vms_service.list() if len(virtual_machines) > 0: print(\"%-30s %s\" % (\"Name\", \"Disk Size\")) print(\"==================================================\") for virtual_machine in virtual_machines: vm_service = vms_service.vm_service(virtual_machine.id) disk_attachments = vm_service.disk_attachments_service().list() disk_size = 0 for disk_attachment in disk_attachments: disk = connection.follow_link(disk_attachment.disk) disk_size += disk.provisioned_size print(\"%-30s: %d\" % (virtual_machine.name, disk_size))", "Name Disk Size ================================================== vm1 50000000000", "dc = api.datacenters.get(name=\"Default\")", "h = api.hosts.get(name=\"myhost\")", "s = params.Storage(address=\"_IP_address_\", path=\"/storage/data\", type_=\"nfs\")", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') dc = api.datacenters.get(name=\"Default\") h = api.hosts.get(name=\"myhost\") s = params.Storage(address=\"_IP_address_\", path=\"/storage/data\", type_=\"nfs\") sd_params = params.StorageDomain(name=\"mydata\", data_center=dc, host=h, type_=\"data\", storage_format=\"v3\", storage=s) try: sd = api.storagedomains.add(sd_params) print(\"Storage Domain '%s' added (%s).\" % (sd.get_name(), sd.get_id())) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types Create the connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the storage domains service: sds_service = connection.system_service().storage_domains_service() Create a new NFS storage domain: sd = sds_service.add( types.StorageDomain( name='mydata', description='My data', type=types.StorageDomainType.DATA, host=types.Host( name='myhost', ), storage=types.HostStorage( type=types.StorageType.NFS, address='_FQDN_', path='/nfs/ovirt/path/to/mydata', ), ), ) Wait until the storage domain is unattached: sd_service = sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = sd_service.get() if sd.status == types.StorageDomainStatus.UNATTACHED: break print(\"Storage Domain '%s' added (%s).\" % (sd.name(), sd.id())) connection.close()", "Storage Domain 'mydata' added (00000000-0000-0000-0000-000000000000).", "dc = api.datacenters.get( name=\"Default\" )", "h = api.hosts.get(name=\"myhost\")", "s = params.Storage(address=\"_IP_address_\", path=\"/storage/iso\", type_=\"nfs\")", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', 
ca_file='ca.pem') dc = api.datacenters.get(name=\"Default\") h = api.hosts.get(name=\"myhost\") s = params.Storage(address=\"_IP_address_\", path=\"/storage/iso\", type_=\"nfs\") sd_params = params.StorageDomain(name=\"myiso\", data_center=dc, host=h, type_=\"iso\", storage_format=\"v3\", storage=s) try: sd = api.storagedomains.add(sd_params) print(\"Storage Domain '%s' added (%s).\" % (sd.get_name(), sd.get_id())) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the storage domains service: sds_service = connection.system_service().storage_domains_service() Use the \"add\" method to create a new NFS storage domain: sd = sds_service.add( types.StorageDomain( name='myiso', description='My ISO', type=types.StorageDomainType.ISO, host=types.Host( name='myhost', ), storage=types.HostStorage( type=types.StorageType.NFS, address='FQDN', path='/nfs/ovirt/path/to/myiso', ), ), ) Wait until the storage domain is unattached: sd_service = sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = sd_service.get() if sd.status == types.StorageDomainStatus.UNATTACHED: break print(\"Storage Domain '%s' added (%s).\" % (sd.name(), sd.id())) Close the connection to the server: connection.close()", "Storage Domain 'myiso' added (00000000-0000-0000-0000-000000000000).", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') dc = api.datacenters.get(name=\"Default\") sd_data = api.storagedomains.get(name=\"mydata\") try: dc_sd = dc.storagedomains.add(sd_data) print(\"Attached data storage domain '%s' to data center '%s' (Status: %s).\" % (dc_sd.get_name(), dc.get_name(), dc_sd.get_status.get_state())) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types Create the connection to the server: connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Locate the service that manages the storage domains and use it to search for the storage domain: sds_service = connection.system_service().storage_domains_service() sd = sds_service.list(search='name=mydata')[0] Locate the service that manages the data centers and use it to search for the data center: dcs_service = connection.system_service().data_centers_service() dc = dcs_service.list(search='name=Default')[0] Locate the service that manages the data center where we want to attach the storage domain: dc_service = dcs_service.data_center_service(dc.id) Locate the service that manages the storage domains that are attached to the data centers: attached_sds_service = dc_service.storage_domains_service() Use the \"add\" method of service that manages the attached storage domains to attach it: attached_sds_service.add( types.StorageDomain( id=sd.id, ), ) Wait until the storage domain is active: attached_sd_service = attached_sds_service.storage_domain_service(sd.id) while True: time.sleep(5) sd = attached_sd_service.get() if sd.status == types.StorageDomainStatus.ACTIVE: break print(\"Attached data storage domain '%s' to data center '%s' (Status: %s).\" % (sd.name(), dc.name(), sd.status.state())) Close the connection to the server: connection.close()", "Attached data storage domain 'data1' to data center 'Default' (Status: 
maintenance).", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') dc = api.datacenters.get(name=\"Default\") sd_data = dc.storagedomains.get(name=\"mydata\") try: sd_data.activate() print(\"Activated storage domain '%s' in data center '%s' (Status: %s).\" % (sd_data.get_name(), dc.get_name(), sd_data.get_status.get_state())) api.disconnect()", "import ovirtsdk4 as sdk connection = sdk.Connection url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Locate the service that manages the storage domains and use it to search for the storage domain: sds_service = connection.system_service().storage_domains_service() sd = sds_service.list(search='name=mydata')[0] Locate the service that manages the data centers and use it to search for the data center: dcs_service = connection.system_service().data_centers_service() dc = dcs_service.list(search='name=Default')[0] Locate the service that manages the data center where we want to attach the storage domain: dc_service = dcs_service.data_center_service(dc.id) Locate the service that manages the storage domains that are attached to the data centers: attached_sds_service = dc_service.storage_domains_service() Activate storage domain: attached_sd_service = attached_sds_service.storage_domain_service(sd.id) attached_sd_service.activate() Wait until the storage domain is active: while True: time.sleep(5) sd = attached_sd_service.get() if sd.status == types.StorageDomainStatus.ACTIVE: break print(\"Attached data storage domain '%s' to data center '%s' (Status: %s).\" % (sd.name(), dc.name(), sd.status.state())) Close the connection to the server: connection.close()", "Activated storage domain 'mydata' in data center 'Default' (Status: active).", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') storage_domains = api.storagedomains.list() for storage_domain in storage_domains: if(storage_domain.get_type() == \"iso\"): print(storage_domain.get_name() + \":\\n\") files = storage_domain.files.list() for file in files: print(\"%s\" % file.get_name()) print() api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) storage_domains_service = connection.system_service().storage_domains_service() storage_domains = storage_domains_service.list() for storage_domain in storage_domains: if(storage_domain.type == types.StorageDomainType.ISO): print(storage_domain.name + \":\\n\") files = storage_domain.files_service().list() for file in files: print(\"%s\" % file.name + \"\\n\") connection.close()", "ISO_storage_domain: file1 file2", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') vm_name = \"vm1\" vm_memory = 512*1024*1024 vm_cluster = api.clusters.get(name=\"Default\") vm_template = api.templates.get(name=\"Blank\") vm_os = params.OperatingSystem(boot=[params.Boot(dev=\"hd\")]) vm_params = params.VM(name=vm_name, memory=vm_memory, cluster=vm_cluster, template=vm_template, os=vm_os) try: api.vms.add(vm=vm_params) print(\"Virtual machine '%s' added.\" % vm_name) 
api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Use the \"add\" method to create a new virtual machine: vms_service.add( types.Vm( name='vm1', memory = 512*1024*1024 cluster=types.Cluster( name='Default', ), template=types.Template( name='Blank', ), os=types.OperatingSystem(boot=types.Boot(devices=[types.BootDevice.HD)] ), ) print(\"Virtual machine '%s' added.\" % vm.name) Close the connection to the server: connection.close()", "Virtual machine 'vm1' added.", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') vm = api.vms.get(name=\"vm1\") nic_name = \"nic1\" nic_interface = \"virtio\" nic_network = api.networks.get(name=\"ovirtmgmt\") nic_params = params.NIC(name=nic_name, interface=nic_interface, network=nic_network) try: nic = vm.nics.add(nic_params) print(\"Network interface '%s' added to '%s'.\" % (nic.get_name(), vm.get_name())) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Locate the virtual machines service and use it to find the virtual machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the network interface cards of the virtual machine: nics_service = vms_service.vm_service(vm.id).nics_service() Use the \"add\" method of the network interface cards service to add the new network interface card: nics_service.add( types.Nic( name='nic1', interface='virtio', network='ovirtmgmt', ), ) print(\"Network interface '%s' added to '%s'.\" % (nic.name(), vm.name())) connection.close()", "Network interface 'nic1' added to 'vm1'.", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') vm = api.vms.get(name='vm1') sd = params.StorageDomains(storage_domain=[api.storagedomains.get(name='data1')]) disk_size = 8*1024*1024 disk_type = 'system' disk_interface = 'virtio' disk_format = 'cow' disk_bootable = True disk_params = params.Disk(storage_domains=sd, size=disk_size, type_=disk_type, interface=disk_interface, format=disk_format, bootable=disk_bootable) try: d = vm.disks.add(disk_params) print(\"Disk '%s' added to '%s'.\" % (d.get_name(), vm.get_name())) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Locate the virtual machines service and use it to find the virtual machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the disk attachments of the virtual machine: disk_attachments_service = vms_service.vm_service(vm.id).disk_attachments_service() Use the \"add\" method of the disk attachments service to add the disk. 
Note that the size of the disk, the `provisioned_size` attribute, is specified in bytes, so to create a disk of 10 GiB the value should be 10 * 2^30. disk_attachment = disk_attachments_service.add( types.DiskAttachment( disk=types.Disk( format=types.DiskFormat.COW, provisioned_size=8*1024*1024, storage_domains=[ types.StorageDomain( name='data1', ), ], ), interface=types.DiskInterface.VIRTIO, bootable=True, active=True, ), ) Wait until the disk status is OK: disks_service = connection.system_service().disks_service() disk_service = disks_service.disk_service(disk_attachment.disk.id) while True: time.sleep(5) disk = disk_service.get() if disk.status == types.DiskStatus.OK: break print(\"Disk '%s' added to '%s'.\" % (disk.name(), vm.name())) Close the connection to the server: connection.close()", "Disk 'vm1_Disk1' added to 'vm1'.", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API(url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem') sd = api.storagedomains.get(name='iso1') cd_iso = sd.files.get(name='my_iso_file.iso') cd_vm = api.vms.get(name='vm1') cd_params = params.CdRom(file=cd_iso) try: cd_vm.cdroms.add(cd_params) print(\"Attached CD to '%s'.\" % cd_vm.get_name()) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Locate the service that manages the CDROM devices of the virtual machine: cdroms_service = vm_service.cdroms_service() Get the first CDROM: cdrom = cdroms_service.list()[0] Locate the service that manages the CDROM device found in previous step: cdrom_service = cdroms_service.cdrom_service(cdrom.id) Change the CD of the VM to 'my_iso_file.iso'. By default the operation permanently changes the disk that is visible to the virtual machine after the next boot, but has no effect on the currently running virtual machine. If you want to change the disk that is visible to the current running virtual machine, change the `current` parameter's value to `True`. 
cdrom_service.update( cdrom=types.Cdrom( file=types.File( id='my_iso_file.iso' ), ), current=False, ) print(\"Attached CD to '%s'.\" % vm.name()) Close the connection to the server: connection.close()", "Attached CD to 'vm1'.", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API(url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem') sd = api.storagedomains.get(name='iso1') vm = api.vms.get(name='vm1') try: vm.cdroms.get(id='00000000-0000-0000-0000-000000000000').delete() print(\"Removed CD from '%s'.\" % vm.get_name()) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Locate the service that manages the CDROM devices of the VM: cdroms_service = vm_service.cdroms_service() Get the first found CDROM: cdrom = cdroms_service.list()[0] Locate the service that manages the CDROM device found in previous step of the VM: cdrom_service = cdroms_service.cdrom_service(cdrom.id) cdrom_service.remove() print(\"Removed CD from '%s'.\" % vm.name()) connection.close()", "Removed CD from 'vm1'.", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API(url=\"https://engine.example.com/ovirt-engine/api\", username='admin@internal', password='password', ca_file='ca.pem') vm = api.vms.get(name=\"VM_NAME\") disk = vm.disks.get(name=\"DISK_NAME\") detach = params.Action(detach=True) disk.delete(action=detach) print(\"Detached disk %s successfully!\" % disk) api.disconnect()", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) attachments_service = vm_service.disk_attachments_service() attachment = next( (a for a in disk_attachments if a.disk.id == disk.id), None ) Remove the attachment. The default behavior is that the disk is detached from the virtual machine, but not deleted from the system. If you wish to delete the disk, change the detach_only parameter to \"False\". 
attachment.remove(detach_only=True) print(\"Detached disk %s successfully!\" % attachment) Close the connection to the server: connection.close()", "Detached disk vm1_disk1 successfully!", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') vm = api.vms.get(name=\"vm1\") try: vm.start() print(\"Started '%s'.\" % vm.get_name()) api.disconnect()", "import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the virtual machine, as that is where the action methods are defined: vm_service = vms_service.vm_service(vm.id) Call the \"start\" method of the service to start it: vm_service.start() Wait until the virtual machine is up: while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.UP: break print(\"Started '%s'.\" % vm.name()) Close the connection to the server: connection.close()", "Started 'vm1'.", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') try: vm = api.vms.get(name=\"vm1\") cdrom = params.CdRom(file=params.File(id='windows_example.iso')) floppy = params.Floppy(file=params.File(id='virtio-win_x86.vfd')) try: vm.start( action=params.Action( vm=params.VM( os=params.OperatingSystem( boot=[params.Boot(dev='cdrom')]), cdroms=params.CdRoms(cdrom=[cdrom]), floppies=params.Floppies(floppy=[floppy]) ) ) )", "import time import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Get the reference to the \"vms\" service: vms_service = connection.system_service().vms_service() Find the virtual machine: vm = vms_service.list(search='name=vm1')[0] Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Locate the service that manages the CDROM devices of the virtual machine: cdroms_service = vm_service.cdroms_service() Get the first CDROM: cdrom = cdroms_service.list()[0] Locate the service that manages the CDROM device found in previous step: cdrom_service = cdroms_service.cdrom_service(cdrom.id) Change the CD of the VM to 'windows_example.iso': cdrom_service.update( cdrom=types.Cdrom( file=types.File( id='windows_example.iso' ), ), current=False, ) Call the \"start\" method of the service to start it: vm_service.start( vm=types.Vm( os=types.OperatingSystem( boot=types.Boot( devices=[ types.BootDevice.CDROM, ] ) ), ) ) Wait until the virtual machine's status is \"UP\": while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.UP: break print(\"Started '%s'.\" % vm.name()) Close the connection to the server: connection.close()", "from ovirtsdk.api import API from ovirtsdk.xml import params try: api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') try: vm = api.vms.get(name=\"vm1\") try: vm.start( use_cloud_init=True, action=params.Action( vm=params.VM( initialization=params.Initialization( 
cloud_init=params.CloudInit( host=params.Host(address=\"MyHost.example.com\"), network_configuration=params.NetworkConfiguration( nics=params.Nics( nic=[params.NIC( name=\"eth0\", boot_protocol=\"static\", on_boot=True, network=params.Network( ip=params.IP( address=\"10.10.10.1\", netmask=\"255.255.255.0\", gateway=\"10.10.10.1\" ) ) ) ] ) ) ) ) ) ) )", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Find the virtual machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search = 'name=vm1')[0] Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Start the virtual machine enabling cloud-init and providing the password for the `root` user and the network configuration: vm_service.start( use_cloud_init=True, vm=types.Vm( initialization=types.Initialization( user_name='root', root_password='password', host_name='MyHost.example.com', nic_configurations=[ types.NicConfiguration( name='eth0', on_boot=True, boot_protocol=types.BootProtocol.STATIC, ip=types.Ip( version=types.IpVersion.V4, address='10.10.10.1', netmask='255.255.255.0', gateway='10.10.10.1' ) ) ) ) ) Close the connection to the server: connection.close()", "from ovirtsdk.api import API from ovirtsdk.xml import params api = API (url='https://engine.example.com', username='admin@internal', password='password', ca_file='ca.pem') event_list = [] event_page_index = 1 event_page_current = api.events.list(query=\"page %s\" % event_page_index) while(len(event_page_current) != 0): event_list = event_list + event_page_current event_page_index = event_page_index + 1 try: event_page_current = api.events.list(query=\"page %s\" % event_page_index) event_list.reverse() for event in event_list: print(\"%s %s CODE %s - %s\" % (event.get_time(), event.get_severity().upper(), event.get_code(), event.get_description()))", "import ovirtsdk4 as sdk import ovirtsdk4.types as types connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) Find the service that manages the collection of events: events_service = connection.system_service().events_service() page_number = 1 events = events_service.list(search='page %s' % page_number) while events: for event in events: print( \"%s %s CODE %s - %s\" % ( event.time, event.severity, event.code, event.description, ) ) page_number = page_number + 1 events = events_service.list(search='page %s' % page_number) Close the connection to the server: connection.close()", "YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 30 - User admin@internal logged in. YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 153 - VM vm1 was started by admin@internal (Host: MyHost). YYYY-MM-DD_T_HH:MM:SS NORMAL CODE 30 - User admin@internal logged in." ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/python_sdk_guide/chap-python_examples
30.5. Key Affinity Service
30.5. Key Affinity Service The key affinity service allows a value to be placed on a certain node in a distributed Red Hat JBoss Data Grid cluster. The service returns a key that is hashed to a particular node based on a supplied cluster address identifying it. The keys returned by the key affinity service hold no meaning of their own, such as a username; they are only random identifiers that are used throughout the application for this record. The provided key generators do not guarantee that the keys returned by this service are unique. For a custom key format, you can pass your own implementation of KeyGenerator. The following is an example of how to obtain and use a reference to this service. Example 30.2. Key Affinity Service The following procedure is an explanation of the provided example. Procedure 30.3. Using the Key Affinity Service Obtain a reference to a cache manager and cache. This starts the service, then uses the supplied Executor to generate and queue keys. Obtain a key from the service which will be mapped to the local node ( cacheManager.getAddress() returns the local address). The entry with a key obtained from the KeyAffinityService is always stored on the node with the provided address. In this case, it is the local node. 30.5.1. Lifecycle KeyAffinityService extends Lifecycle , which allows the key affinity service to be stopped, started, and restarted. Example 30.3. Key Affinity Service Lifecycle The service is instantiated through the KeyAffinityServiceFactory . All factory methods take an Executor parameter, which is used for asynchronous key generation so that it does not occur in the caller's thread. The user controls the shutting down of this Executor . The KeyAffinityService must be explicitly stopped when it is no longer required. This stops the background key generation and releases other held resources. The KeyAffinityService only stops itself when the cache manager with which it is registered is shut down. 30.5.2. Topology Changes KeyAffinityService key ownership may change when a topology change occurs. The key affinity service monitors topology changes and updates itself so that it does not return stale keys, or keys that would map to a different node than the one specified. However, this does not guarantee that node affinity has not changed by the time a key is used. For example: Thread ( T1 ) reads a key ( K1 ) that maps to a node ( A ). A topology change occurs, resulting in K1 mapping to node B . T1 uses K1 to add something to the cache. At this point, K1 maps to B , a different node from the one requested at the time of the read. This scenario is not ideal, but it is supported behavior for the application, because keys that are already in use may be moved during a cluster change. The KeyAffinityService provides an access proximity optimization for stable clusters, which does not apply during the instability of topology changes.
[ "EmbeddedCacheManager cacheManager = getCacheManager(); Cache cache = cacheManager.getCache(); KeyAffinityService keyAffinityService = KeyAffinityServiceFactory.newLocalKeyAffinityService( cache, new RndKeyGenerator(), Executors.newSingleThreadExecutor(), 100); Object localKey = keyAffinityService.getKeyForAddress(cacheManager.getAddress()); cache.put(localKey, \"yourValue\");", "public interface Lifecycle { void start(); void stop(); }" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-key_affinity_service
Chapter 2. Container image requirements
Chapter 2. Container image requirements Products packaged as containers must comply with the following requirements to ensure that container images are: Covered as part of the end-user Red Hat Enterprise Linux support subscription. Scanned to avoid introducing known security vulnerabilities in customer environments. Additional resources Red Hat Container Support Policy 2.1. Platform requirements Requirement Justification Containers must be able to run by using Podman . Allows the administrator to run and manage their containers by using an OCI-compatible, RHEL-integrated command. The podman command supports options similar to those found in the docker command. Containers must be able to be started and stopped by using a Systemd unit file. Allows an administrator to automatically start, stop, and check the status of their containers by using a standard RHEL command. 2.2. Image content requirements Requirement Justification Container images must declare a non-root user unless their functionality requires privileged access. To certify container images requiring root access, you must: Include the requirement in the product documentation. Indicate that the container requires privileged host-level access in the certification project settings. This setting is subject to Red Hat review. Test name: RunAsNonRoot Ensures that containers do not run as the root user unless required. Images running as the root user can pose a security risk. Container images must use a Universal Base Image (UBI) provided by Red Hat. The version of the UBI base image must be supported on the RHEL version undergoing certification. For more information, see the Red Hat Enterpise Linux Container Compatibility Matrix . You can add additional RHEL packages to the UBI images, except for kernel packages. Test name: BasedOnUbi Ensures that application runtime dependencies, such as operating system components and libraries, are covered under the customer's subscription. Container images must not change content provided by Red Hat packages or layers except for files that both you or the customers can change, such as configuration files. Test name: HasModifiedFiles Ensures that Red Hat does not deny support on the basis of unauthorized changes to Red Hat components. Container images must contain a "licenses" directory. Use this directory to add files containing software terms and conditions for your product and any open source software included in the image. Test name: HasLicense Ensures that customers are aware of the terms and conditions applicable to the software included in the image. Uncompressed container images must have less than 40 layers. Test name: LayerCountAcceptable Ensures that images run appropriately on containers. Too many layers could degrade the performance. Container images must not include RHEL kernel packages. Test name: HasNoProhibitedPackages Ensures compliance with RHEL redistribution rules for partners. Container images must not contain Red hat components with identified important or critical vulnerabilities . Test name: N/A . The Red Hat Certification Service conducts this scan. Ensures that customers are not exposed to known vulnerabilities. Container image names must not begin with any Red Hat Marks. Test name: HasProhibitedContainerName Ensures compliance with Red Hat trademark guidelines. Additional resources Red Hat Container Support Policy UBI FAQ's and licensing information UBI images, repositories, and package details 2.3. 
Image metadata requirements Requirement Justification Container images must include the following labels: name : Image name maintainer : Maintainer name vendor : Company name version : Version of the image release : A number used to identify the specific build for this image summary : A short overview of the application or component in this image description : A long description of the application or component in this image Test name: HasRequiredLabel Ensures that customers can obtain information about the image provider and the content of the images in a consistent way. The image name must follow the Red Hat trademark guidelines. Container images must include a unique tag that is descriptive of the certified image. Red Hat recommends appending the image version and its build date or released date to the unique tag. Floating tags, such as latest although not adequate for certification, can be added to the image in addition to the descriptive tag. Test name: HasUniqueTag Ensures that images can be uniquely identified. Additional resources For more information about container images and Red Hat support, see Red Hat Container Support Policy . For more information about Red Hat base images, see Red Hat Enterprise Linux documentation . 2.4. Image maintenance requirements Partners are responsible for monitoring the health status of their certified containers. When an image rebuild is required because of new functionality or a security update, submit the updated container image for recertification and publication. Partners must keep the application components up-to-date and rebuild their container images periodically.
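The label requirements above can be checked locally before you submit an image for certification. The following is a minimal sketch of such a check, not part of the certification tooling: it drives the podman CLI from Python's standard library, and the image reference is a placeholder that you would replace with your own candidate image.

import json
import subprocess

# Placeholder image reference; replace with your own candidate image.
IMAGE = "registry.example.com/myorg/myapp:1.0-20240101"

# Labels required by the image metadata requirements table above.
REQUIRED_LABELS = {
    "name", "maintainer", "vendor", "version",
    "release", "summary", "description",
}

# "podman image inspect" prints a JSON array with one entry per image.
result = subprocess.run(
    ["podman", "image", "inspect", IMAGE],
    capture_output=True, text=True, check=True,
)
data = json.loads(result.stdout)[0]

# Depending on the podman version, labels are reported at the top level
# or under "Config"; check both to be safe.
labels = data.get("Labels") or data.get("Config", {}).get("Labels") or {}

missing = REQUIRED_LABELS - set(labels)
if missing:
    print("Missing required labels: %s" % ", ".join(sorted(missing)))
else:
    print("All required labels are present.")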
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_enterprise_linux_software_certification_policy_guide/assembly_container-requirements_isv-pol-introduction
Chapter 6. Deploying hosted control planes in a disconnected environment
Chapter 6. Deploying hosted control planes in a disconnected environment 6.1. Introduction to hosted control planes in a disconnected environment In the context of hosted control planes, a disconnected environment is an OpenShift Container Platform deployment that is not connected to the internet and that uses hosted control planes as a base. You can deploy hosted control planes in a disconnected environment on bare metal or OpenShift Virtualization. Hosted control planes in disconnected environments function differently than in standalone OpenShift Container Platform: The control plane is in the management cluster. The control plane is where the pods of the hosted control plane are run and managed by the Control Plane Operator. The data plane is in the workers of the hosted cluster. The data plane is where the workloads and other pods run, all managed by the HostedClusterConfig Operator. Depending on where the pods are running, they are affected by the ImageDigestMirrorSet (IDMS) or ImageContentSourcePolicy (ICSP) that is created in the management cluster or by the ImageContentSource that is set in the spec field of the manifest for the hosted cluster. The spec field is translated into an IDMS object on the hosted cluster. You can deploy hosted control planes in a disconnected environment on IPv4, IPv6, and dual-stack networks. IPv4 is one of the simplest network configurations to deploy hosted control planes in a disconnected environment. IPv4 ranges require fewer external components than IPv6 or dual-stack setups. For hosted control planes on OpenShift Virtualization in a disconnected environment, use either an IPv4 or a dual-stack network. Important Hosted control planes in a disconnected environment on a dual-stack network is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.2. Deploying hosted control planes on OpenShift Virtualization in a disconnected environment When you deploy hosted control planes in a disconnected environment, some of the steps differ depending on the platform you use. The following procedures are specific to deployments on OpenShift Virtualization. 6.2.1. Prerequisites You have a disconnected OpenShift Container Platform environment serving as your management cluster. You have an internal registry to mirror images on. For more information, see About disconnected installation mirroring . 6.2.2. Configuring image mirroring for hosted control planes in a disconnected environment Image mirroring is the process of fetching images from external registries, such as registry.redhat.com or quay.io , and storing them in your private registry. In the following procedures, the oc-mirror tool is used, which is a binary that uses the ImageSetConfiguration object. In the file, you can specify the following information: The OpenShift Container Platform versions to mirror. The versions are in quay.io . The additional Operators to mirror. Select packages individually. The extra images that you want to add to the repository. 
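Because the ImageSetConfiguration object is plain YAML, you can also generate it from a script when you maintain several disconnected environments. The following is a minimal sketch of that approach, not part of the oc-mirror tooling: it assumes the PyYAML library is installed, uses the same field names as the example that follows, and uses placeholder version and package values.

import yaml  # PyYAML is assumed to be installed

OCP_BUILD = "4.18.0"  # placeholder; use a supported OpenShift Container Platform build

image_set_config = {
    "apiVersion": "mirror.openshift.io/v2alpha1",
    "kind": "ImageSetConfiguration",
    "mirror": {
        "platform": {
            "channels": [{
                "name": "candidate-4.18",
                "minVersion": OCP_BUILD,
                "maxVersion": OCP_BUILD,
                "type": "ocp",
            }],
            # Mirror the RHCOS container disk image for the KubeVirt provider:
            "kubeVirtContainer": True,
            "graph": True,
        },
        "operators": [{
            "catalog": "registry.redhat.io/redhat/redhat-operator-index:v4.18",
            "packages": [{"name": "kubevirt-hyperconverged"}],
        }],
    },
}

# Write the file that is passed to the oc-mirror command shown later in this procedure:
with open("imagesetconfig.yaml", "w") as f:
    yaml.safe_dump(image_set_config, f, sort_keys=False)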
Prerequisites Ensure that the registry server is running before you start the mirroring process. Procedure To configure image mirroring, complete the following steps: Ensure that your USD{HOME}/.docker/config.json file is updated with the registries that you are going to mirror from and with the private registry that you plan to push the images to. By using the following example, create an ImageSetConfiguration object to use for mirroring. Replace values as needed to match your environment: apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: channels: - name: candidate-4.18 minVersion: <4.x.y-build> 1 maxVersion: <4.x.y-build> 2 type: ocp kubeVirtContainer: true 3 graph: true additionalImages: 4 - name: quay.io/karmab/origin-keepalived-ipfailover:latest - name: quay.io/karmab/kubectl:latest - name: quay.io/karmab/haproxy:latest - name: quay.io/karmab/mdns-publisher:latest - name: quay.io/karmab/origin-coredns:latest - name: quay.io/karmab/curl:latest - name: quay.io/karmab/kcli:latest - name: quay.io/user-name/trbsht:latest - name: quay.io/user-name/hypershift:BMSelfManage-v4.18 - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: lvms-operator - name: local-storage-operator - name: odf-csi-addons-operator - name: odf-operator - name: mcg-operator - name: ocs-operator - name: metallb-operator - name: kubevirt-hyperconverged 5 1 2 Replace <4.x.y-build> with the supported OpenShift Container Platform version you want to use. 3 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only. 4 Images specified in the additionalImages field are examples only and are not strictly needed. 5 For deployments that use the KubeVirt provider, include this line. Start the mirroring process by entering the following command: USD oc-mirror --v2 --config imagesetconfig.yaml \ --workspace file://mirror-file docker://<registry> After the mirroring process is finished, you have a new folder named mirror-file , which contains the ImageDigestMirrorSet (IDMS), ImageTagMirrorSet (ITMS), and the catalog sources to apply on the hosted cluster. Mirror the nightly or CI versions of OpenShift Container Platform by configuring the imagesetconfig.yaml file as follows: apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: graph: true release: registry.ci.openshift.org/ocp/release:<4.x.y-build> 1 kubeVirtContainer: true 2 # ... 1 Replace <4.x.y-build> with the supported OpenShift Container Platform version you want to use. 2 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only. If you have a partially disconnected environment, mirror the images from the image set configuration to a registry by entering the following command: USD oc mirror -c imagesetconfig.yaml \ --workspace file://<file_path> docker://<mirror_registry_url> --v2 For more information, see "Mirroring an image set in a partially disconnected environment". 
If you have a fully disconnected environment, perform the following steps: Mirror the images from the specified image set configuration to the disk by entering the following command: USD oc mirror -c imagesetconfig.yaml file://<file_path> --v2 For more information, see "Mirroring an image set in a fully disconnected environment". Process the image set file on the disk and mirror the contents to a target mirror registry by entering the following command: USD oc mirror -c imagesetconfig.yaml \ --from file://<file_path> docker://<mirror_registry_url> --v2 Mirror the latest multicluster engine Operator images by following the steps in Install on disconnected networks . Additional resources Mirroring an image set in a partially disconnected environment Mirroring an image set in a fully disconnected environment 6.2.3. Applying objects in the management cluster After the mirroring process is complete, you need to apply two objects in the management cluster: ImageContentSourcePolicy (ICSP) or ImageDigestMirrorSet (IDMS) Catalog sources When you use the oc-mirror tool, the output artifacts are in a folder named oc-mirror-workspace/results-XXXXXX/ . The ICSP or IDMS initiates a MachineConfig change that does not restart your nodes but restarts the kubelet on each of them. After the nodes are marked as READY , you need to apply the newly generated catalog sources. The catalog sources initiate actions in the openshift-marketplace Operator, such as downloading the catalog image and processing it to retrieve all the PackageManifests that are included in that image. Procedure To check the new sources, run the following command by using the new CatalogSource as a source: USD oc get packagemanifest To apply the artifacts, complete the following steps: Create the ICSP or IDMS artifacts by entering the following command: USD oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml Wait for the nodes to become ready, and then enter the following command: USD oc apply -f catalogSource-XXXXXXXX-index.yaml Mirror the OLM catalogs and configure the hosted cluster to point to the mirror. When you use the management (default) OLMCatalogPlacement mode, the image stream that is used for OLM catalogs is not automatically amended with override information from the ICSP on the management cluster. If the OLM catalogs are properly mirrored to an internal registry by using the original name and tag, add the hypershift.openshift.io/olm-catalogs-is-registry-overrides annotation to the HostedCluster resource. The format is "sr1=dr1,sr2=dr2" , where the source registry string is a key and the destination registry is a value. To bypass the OLM catalog image stream mechanism, use the following four annotations on the HostedCluster resource to directly specify the addresses of the four images to use for OLM Operator catalogs: hypershift.openshift.io/certified-operators-catalog-image hypershift.openshift.io/community-operators-catalog-image hypershift.openshift.io/redhat-marketplace-catalog-image hypershift.openshift.io/redhat-operators-catalog-image In this case, the image stream is not created, and you must update the value of the annotations when the internal mirror is refreshed to pull in Operator updates. steps Deploy the multicluster engine Operator by completing the steps in Deploying multicluster engine Operator for a disconnected installation of hosted control planes . Additional resources Mirroring images for a disconnected installation by using the oc-mirror plugin v2 . 6.2.4. 
Deploying multicluster engine Operator for a disconnected installation of hosted control planes The multicluster engine for Kubernetes Operator plays a crucial role in deploying clusters across providers. If you do not have multicluster engine Operator installed, review the following documentation to understand the prerequisites and steps to install it: About cluster lifecycle with multicluster engine operator Installing and upgrading multicluster engine operator 6.2.5. Configuring TLS certificates for a disconnected installation of hosted control planes To ensure proper function in a disconnected deployment, you need to configure the registry CA certificates in the management cluster and the worker nodes for the hosted cluster. 6.2.5.1. Adding the registry CA to the management cluster To add the registry CA to the management cluster, complete the following steps. Procedure Create a config map that resembles the following example: apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <config_map_namespace> 2 data: 3 <registry_name>..<port>: | 4 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- 1 Specify the name of the config map. 2 Specify the namespace for the config map. 3 In the data field, specify the registry names and the registry certificate content. Replace <port> with the port where the registry server is running; for example, 5000 . 4 Ensure that the data in the config map is defined by using | only instead of other methods, such as | - . If you use other methods, issues can occur when the pod reads the certificates. Patch the cluster-wide object, image.config.openshift.io to include the following specification: spec: additionalTrustedCA: - name: registry-config As a result of this patch, the control plane nodes can retrieve images from the private registry and the HyperShift Operator can extract the OpenShift Container Platform payload for hosted cluster deployments. The process to patch the object might take several minutes to be completed. 6.2.5.2. Adding the registry CA to the worker nodes for the hosted cluster In order for the data plane workers in the hosted cluster to be able to retrieve images from the private registry, you need to add the registry CA to the worker nodes. Procedure In the hc.spec.additionalTrustBundle file, add the following specification: spec: additionalTrustBundle: - name: user-ca-bundle 1 1 The user-ca-bundle entry is a config map that you create in the step. In the same namespace where the HostedCluster object is created, create the user-ca-bundle config map. The config map resembles the following example: apiVersion: v1 data: ca-bundle.crt: | // Registry1 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry2 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry3 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1 1 Specify the namespace where the HostedCluster object is created. 6.2.6. Creating a hosted cluster on OpenShift Virtualization A hosted cluster is an OpenShift Container Platform cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane. 6.2.6.1. 
Requirements to deploy hosted control planes on OpenShift Virtualization As you prepare to deploy hosted control planes on OpenShift Virtualization, consider the following information: Run the management cluster on bare metal. Each hosted cluster must have a cluster-wide unique name. Do not use clusters as a hosted cluster name. A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster. When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using Logical Volume Manager storage". 6.2.6.2. Creating a hosted cluster with the KubeVirt platform by using the CLI To create a hosted cluster, you can use the hosted control plane command-line interface, hcp . Procedure Create a hosted cluster with the KubeVirt platform by entering the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <node_pool_replica_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <value_for_memory> \ 4 --cores <value_for_cpu> \ 5 --etcd-storage-class=<etcd_storage_class> 6 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the node pool replica count, for example, 3 . You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created. 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 6Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify the etcd storage class name, for example, lvm-storageclass . Note You can use the --release-image flag to set up the hosted cluster with a specific OpenShift Container Platform release. A default node pool is created for the cluster with two virtual machine worker replicas according to the --node-pool-replicas flag. After a few moments, verify that the hosted control plane pods are running by entering the following command: USD oc -n clusters-<hosted-cluster-name> get pods Example output NAME READY STATUS RESTARTS AGE capi-provider-5cc7b74f47-n5gkr 1/1 Running 0 3m catalog-operator-5f799567b7-fd6jw 2/2 Running 0 69s certified-operators-catalog-784b9899f9-mrp6p 1/1 Running 0 66s cluster-api-6bbc867966-l4dwl 1/1 Running 0 66s . . . redhat-operators-catalog-9d5fd4d44-z8qqk 1/1 Running 0 66s A hosted cluster that has worker nodes that are backed by KubeVirt virtual machines typically takes 10-15 minutes to be fully provisioned. To check the status of the hosted cluster, see the corresponding HostedCluster resource by entering the following command: USD oc get --namespace clusters hostedclusters See the following example output, which illustrates a fully provisioned HostedCluster object: Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. 6.2.6.3. Configuring the default ingress and DNS for hosted control planes on OpenShift Virtualization Every OpenShift Container Platform cluster includes a default application Ingress Controller, which must have an wildcard DNS record associated with it. 
By default, hosted clusters that are created by using the HyperShift KubeVirt provider automatically become a subdomain of the OpenShift Container Platform cluster that the KubeVirt virtual machines run on. For example, your OpenShift Container Platform cluster might have the following default ingress DNS entry: *.apps.mgmt-cluster.example.com As a result, a KubeVirt hosted cluster that is named guest and that runs on that underlying OpenShift Container Platform cluster has the following default ingress: *.apps.guest.apps.mgmt-cluster.example.com Procedure For the default ingress DNS to work properly, the cluster that hosts the KubeVirt virtual machines must allow wildcard DNS routes. You can configure this behavior by entering the following command: USD oc patch ingresscontroller -n openshift-ingress-operator default \ --type=json \ -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {wildcardPolicy: "WildcardsAllowed"}}]' Note When you use the default hosted cluster ingress, connectivity is limited to HTTPS traffic over port 443. Plain HTTP traffic over port 80 is rejected. This limitation applies to only the default ingress behavior. 6.2.6.4. Customizing ingress and DNS behavior If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration steps during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration. 6.2.6.4.1. Deploying a hosted cluster that specifies the base domain To create a hosted cluster that specifies a base domain, complete the following steps. Procedure Enter the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <value_for_memory> \ 4 --cores <value_for_cpu> \ 5 --base-domain <basedomain> 6 1 Specify the name of your hosted cluster. 2 Specify the worker count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 6Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify the base domain, for example, hypershift.lab . As a result, the hosted cluster has an ingress wildcard that is configured for the cluster name and the base domain, for example, .apps.example.hypershift.lab . The hosted cluster remains in Partial status because after you create a hosted cluster with unique base domain, you must configure the required DNS records and load balancer. 
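Before you configure the DNS records and the load balancer, you can optionally confirm that wildcard routes are admitted on the management cluster and check which base domain the hosted cluster was created with. This is a minimal verification sketch; the hosted cluster name and the clusters namespace are placeholders that you must replace with your own values:

# Confirm that the default Ingress Controller admits wildcard routes
oc get ingresscontroller default -n openshift-ingress-operator \
  -o jsonpath='{.spec.routeAdmission.wildcardPolicy}{"\n"}'

# Check the base domain that the hosted cluster was created with
oc get hostedcluster <hosted_cluster_name> -n clusters \
  -o jsonpath='{.spec.dns.baseDomain}{"\n"}'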
View the status of your hosted cluster by entering the following command: USD oc get --namespace clusters hostedclusters Example output NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example example-admin-kubeconfig Partial True False The hosted control plane is available Access the cluster by entering the following commands: USD hcp create kubeconfig --name <hosted_cluster_name> \ > <hosted_cluster_name>-kubeconfig USD oc --kubeconfig <hosted_cluster_name>-kubeconfig get co Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console <4.x.0> False False False 30m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get "https://console-openshift-console.apps.example.hypershift.lab": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host ingress <4.x.0> True False True 28m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing) Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. steps To fix the errors in the output, complete the steps in "Setting up the load balancer" and "Setting up a wildcard DNS". Note If your hosted cluster is on bare metal, you might need MetalLB to set up load balancer services. For more information, see "Configuring MetalLB". 6.2.6.4.2. Setting up the load balancer Set up the load balancer service that routes ingress traffic to the KubeVirt VMs and assigns a wildcard DNS entry to the load balancer IP address. Procedure A NodePort service that exposes the hosted cluster ingress already exists. You can export the node ports and create the load balancer service that targets those ports. Get the HTTP node port by entering the following command: USD oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \ -n openshift-ingress router-nodeport-default \ -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}' Note the HTTP node port value to use in the step. Get the HTTPS node port by entering the following command: USD oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \ -n openshift-ingress router-nodeport-default \ -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}' Note the HTTPS node port value to use in the step. Create the load balancer service by entering the following command: oc apply -f - apiVersion: v1 kind: Service metadata: labels: app: <hosted_cluster_name> name: <hosted_cluster_name>-apps namespace: clusters-<hosted_cluster_name> spec: ports: - name: https-443 port: 443 protocol: TCP targetPort: <https_node_port> 1 - name: http-80 port: 80 protocol: TCP targetPort: <http-node-port> 2 selector: kubevirt.io: virt-launcher type: LoadBalancer 1 Specify the HTTPS node port value that you noted in the step. 2 Specify the HTTP node port value that you noted in the step. 6.2.6.4.3. Setting up a wildcard DNS Set up a wildcard DNS record or CNAME that references the external IP of the load balancer service. Procedure Get the external IP address by entering the following command: USD oc -n clusters-<hosted_cluster_name> get service <hosted-cluster-name>-apps \ -o jsonpath='{.status.loadBalancer.ingress[0].ip}' Example output 192.168.20.30 Configure a wildcard DNS entry that references the external IP address. 
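If the environment resolves names with dnsmasq, as in the hypervisor examples later in this document, the wildcard entry can be a single address line. The following sketch is one possible form; the drop-in file name is hypothetical, and the domain and IP address are taken from the earlier example values, so replace them with your own:

# Hypothetical drop-in file; adjust the apps domain and the load balancer IP
cat <<'EOF' | sudo tee /etc/dnsmasq.d/hosted-cluster-apps.conf
address=/apps.example.hypershift.lab/192.168.20.30
EOF
sudo systemctl restart dnsmasq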
View the following example DNS entry: *.apps.<hosted_cluster_name>.<base_domain>. The DNS entry must be able to route inside and outside of the cluster. DNS resolutions example dig +short test.apps.example.hypershift.lab 192.168.20.30 Check that the hosted cluster status has moved from Partial to Completed by entering the following command: USD oc get --namespace clusters hostedclusters Example output NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use. 6.2.7. Finishing the deployment You can monitor the deployment of a hosted cluster from two perspectives: the control plane and the data plane. 6.2.7.1. Monitoring the control plane While the deployment proceeds, you can monitor the control plane by gathering information about the following artifacts: The HyperShift Operator The HostedControlPlane pod The bare metal hosts The agents The InfraEnv resource The HostedCluster and NodePool resources Procedure Enter the following commands to monitor the control plane: USD export KUBECONFIG=/root/.kcli/clusters/hub-ipv4/auth/kubeconfig USD watch "oc get pod -n hypershift;echo;echo;\ oc get pod -n clusters-hosted-ipv4;echo;echo;\ oc get bmh -A;echo;echo;\ oc get agent -A;echo;echo;\ oc get infraenv -A;echo;echo;\ oc get hostedcluster -A;echo;echo;\ oc get nodepool -A;echo;echo;" 6.2.7.2. Monitoring the data plane While the deployment proceeds, you can monitor the data plane by gathering information about the following artifacts: The cluster version The nodes, specifically, whether the nodes joined the cluster The cluster Operators Procedure Enter the following commands: 6.3. Deploying hosted control planes on bare metal in a disconnected environment When you provision hosted control planes on bare metal, you use the Agent platform. The Agent platform and multicluster engine for Kubernetes Operator work together to enable disconnected deployments. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For an introduction to the central infrastructure management service, see Enabling the central infrastructure management service . 6.3.1. Disconnected environment architecture for bare metal The following diagram illustrates an example architecture of a disconnected environment: Configure infrastructure services, including the registry certificate deployment with TLS support, web server, and DNS, to ensure that the disconnected deployment works. Create a config map in the openshift-config namespace. In this example, the config map is named registry-config . The content of the config map is the Registry CA certificate. The data field of the config map must contain the following key/value: Key: <registry_dns_domain_name>..<port> , for example, registry.hypershiftdomain.lab..5000: . Ensure that you place .. after the registry DNS domain name when you specify a port. Value: The certificate content For more information about creating a config map, see Configuring TLS certificates for a disconnected installation of hosted control planes . Modify the image.config.openshift.io custom resource (CR) specification and add a new field named additionalTrustedCA with a value of name: registry-config . Create a config map that contains two data fields.
One field contains the registries.conf file in RAW format, and the other field contains the Registry CA and is named ca-bundle.crt . The config map belongs to the multicluster-engine namespace, and the config map name is referenced in other objects. For an example of a config map, see the following sample configuration: apiVersion: v1 kind: ConfigMap metadata: name: custom-registries namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- # ... -----END CERTIFICATE----- registries.conf: | unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "registry.redhat.io/openshift4" mirror-by-digest-only = true [[registry.mirror]] location = "registry.ocp-edge-cluster-0.qe.lab.redhat.com:5000/openshift4" [[registry]] prefix = "" location = "registry.redhat.io/rhacm2" mirror-by-digest-only = true # ... # ... In the multicluster engine Operator namespace, you create the multiclusterengine CR, which enables both the Agent and hypershift-addon add-ons. The multicluster engine Operator namespace must contain the config maps to modify behavior in a disconnected deployment. The namespace also contains the multicluster-engine , assisted-service , and hypershift-addon-manager pods. Create the following objects that are necessary to deploy the hosted cluster: Secrets: Secrets contain the pull secret, SSH key, and etcd encryption key. Config map: The config map contains the CA certificate of the private registry. HostedCluster : The HostedCluster resource defines the configuration of the cluster that the user intends to create. NodePool : The NodePool resource identifies the node pool that references the machines to use for the data plane. After you create the hosted cluster objects, the HyperShift Operator establishes the HostedControlPlane namespace to accommodate control plane pods. The namespace also hosts components such as Agents, bare metal hosts (BMHs), and the InfraEnv resource. Later, you create the InfraEnv resource, and after ISO creation, you create the BMHs and their secrets that contain baseboard management controller (BMC) credentials. The Metal3 Operator in the openshift-machine-api namespace inspects the new BMHs. Then, the Metal3 Operator tries to connect to the BMCs to start them by using the configured LiveISO and RootFS values that are specified through the AgentServiceConfig CR in the multicluster engine Operator namespace. After the worker nodes of the HostedCluster resource are started, an Agent container is started. This agent establishes contact with the Assisted Service, which orchestrates the actions to complete the deployment. Initially, you need to scale the NodePool resource to the number of worker nodes for the HostedCluster resource. The Assisted Service manages the remaining tasks. At this point, you wait for the deployment process to be completed. 6.3.2. Requirements to deploy hosted control planes on bare metal in a disconnected environment To configure hosted control planes in a disconnected environment, you must meet the following prerequisites: CPU: The number of CPUs provided determines how many hosted clusters can run concurrently. In general, use 16 CPUs for each node for 3 nodes. For minimal development, you can use 12 CPUs for each node for 3 nodes. Memory: The amount of RAM affects how many hosted clusters can be hosted. Use 48 GB of RAM for each node. For minimal development, 18 GB of RAM might be sufficient. 
Storage: Use SSD storage for multicluster engine Operator. Management cluster: 250 GB. Registry: The storage needed depends on the number of releases, operators, and images that are hosted. An acceptable number might be 500 GB, preferably separated from the disk that hosts the hosted cluster. Web server: The storage needed depends on the number of ISOs and images that are hosted. An acceptable number might be 500 GB. Production: For a production environment, separate the management cluster, the registry, and the web server on different disks. This example illustrates a possible configuration for production: Registry: 2 TB Management cluster: 500 GB Web server: 2 TB 6.3.3. Extracting the release image digest You can extract the OpenShift Container Platform release image digest by using the tagged image. Procedure Obtain the image digest by running the following command: USD oc adm release info <tagged_openshift_release_image> | grep "Pull From" Replace <tagged_openshift_release_image> with the tagged image for the supported OpenShift Container Platform version, for example, quay.io/openshift-release-dev/ocp-release:4.14.0-x8_64 . Example output 6.3.4. Configuring the hypervisor for a disconnected installation of hosted control planes The following information applies to virtual machine environments only. Procedure To deploy a virtual management cluster, access the required packages by entering the following command: USD sudo dnf install dnsmasq radvd vim golang podman bind-utils \ net-tools httpd-tools tree htop strace tmux -y Enable and start the Podman service by entering the following command: USD systemctl enable --now podman To use kcli to deploy the management cluster and other virtual components, install and configure the hypervisor by entering the following commands: USD sudo yum -y install libvirt libvirt-daemon-driver-qemu qemu-kvm USD sudo usermod -aG qemu,libvirt USD(id -un) USD sudo newgrp libvirt USD sudo systemctl enable --now libvirtd USD sudo dnf -y copr enable karmab/kcli USD sudo dnf -y install kcli USD sudo kcli create pool -p /var/lib/libvirt/images default USD kcli create host kvm -H 127.0.0.1 local USD sudo setfacl -m u:USD(id -un):rwx /var/lib/libvirt/images USD kcli create network -c 192.168.122.0/24 default Enable the network manager dispatcher to ensure that virtual machines can resolve the required domains, routes, and registries. To enable the network manager dispatcher, in the /etc/NetworkManager/dispatcher.d/ directory, create a script named forcedns that contains the following content: #!/bin/bash export IP="192.168.126.1" 1 export BASE_RESOLV_CONF="/run/NetworkManager/resolv.conf" if ! [[ `grep -q "USDIP" /etc/resolv.conf` ]]; then export TMP_FILE=USD(mktemp /etc/forcedns_resolv.conf.XXXXXX) cp USDBASE_RESOLV_CONF USDTMP_FILE chmod --reference=USDBASE_RESOLV_CONF USDTMP_FILE sed -i -e "s/dns.base.domain.name//" \ -e "s/search /& dns.base.domain.name /" \ -e "0,/nameserver/s/nameserver/& USDIP\n&/" USDTMP_FILE 2 mv USDTMP_FILE /etc/resolv.conf fi echo "ok" 1 Modify the IP variable to point to the IP address of the hypervisor interface that hosts the OpenShift Container Platform management cluster. 2 Replace dns.base.domain.name with the DNS base domain name. After you create the file, add permissions by entering the following command: USD chmod 755 /etc/NetworkManager/dispatcher.d/forcedns Run the script and verify that the output returns ok . Configure ksushy to simulate baseboard management controllers (BMCs) for the virtual machines. 
Enter the following commands: USD sudo dnf install python3-pyOpenSSL.noarch python3-cherrypy -y USD kcli create sushy-service --ssl --ipv6 --port 9000 USD sudo systemctl daemon-reload USD systemctl enable --now ksushy Test whether the service is correctly functioning by entering the following command: USD systemctl status ksushy If you are working in a development environment, configure the hypervisor system to allow various types of connections through different virtual networks within the environment. Note If you are working in a production environment, you must establish proper rules for the firewalld service and configure SELinux policies to maintain a secure environment. For SELinux, enter the following command: USD sed -i s/^SELINUX=.*USD/SELINUX=permissive/ /etc/selinux/config; \ setenforce 0 For firewalld , enter the following command: USD systemctl disable --now firewalld For libvirtd , enter the following commands: USD systemctl restart libvirtd USD systemctl enable --now libvirtd 6.3.5. DNS configurations on bare metal The API Server for the hosted cluster is exposed as a NodePort service. A DNS entry must exist for api.<hosted_cluster_name>.<base_domain> that points to destination where the API Server can be reached. The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods. Example DNS configuration api.example.krnl.es. IN A 192.168.122.20 api.example.krnl.es. IN A 192.168.122.21 api.example.krnl.es. IN A 192.168.122.22 api-int.example.krnl.es. IN A 192.168.122.20 api-int.example.krnl.es. IN A 192.168.122.21 api-int.example.krnl.es. IN A 192.168.122.22 `*`.apps.example.krnl.es. IN A 192.168.122.23 If you are configuring DNS for a disconnected environment on an IPv6 network, the configuration looks like the following example. Example DNS configuration for an IPv6 network api.example.krnl.es. IN A 2620:52:0:1306::5 api.example.krnl.es. IN A 2620:52:0:1306::6 api.example.krnl.es. IN A 2620:52:0:1306::7 api-int.example.krnl.es. IN A 2620:52:0:1306::5 api-int.example.krnl.es. IN A 2620:52:0:1306::6 api-int.example.krnl.es. IN A 2620:52:0:1306::7 `*`.apps.example.krnl.es. IN A 2620:52:0:1306::10 If you are configuring DNS for a disconnected environment on a dual stack network, be sure to include DNS entries for both IPv4 and IPv6. Example DNS configuration for a dual stack network host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10 host-record=api.hub-dual.dns.base.domain.name,192.168.126.10 address=/apps.hub-dual.dns.base.domain.name/192.168.126.11 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20 dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21 dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22 dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25 dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26 host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2 host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2 address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5] dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6] dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7] dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8] dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9] 6.3.6. 
Deploying a registry for hosted control planes in a disconnected environment For development environments, deploy a small, self-hosted registry by using a Podman container. For production environments, deploy an enterprise-hosted registry, such as Red Hat Quay, Nexus, or Artifactory. Procedure To deploy a small registry by using Podman, complete the following steps: As a privileged user, access the USD{HOME} directory and create the following script: #!/usr/bin/env bash set -euo pipefail PRIMARY_NIC=USD(ls -1 /sys/class/net | grep -v podman | head -1) export PATH=/root/bin:USDPATH export PULL_SECRET="/root/baremetal/hub/openshift_pull.json" 1 if [[ ! -f USDPULL_SECRET ]];then echo "Pull Secret not found, exiting..." exit 1 fi dnf -y install podman httpd httpd-tools jq skopeo libseccomp-devel export IP=USD(ip -o addr show USDPRIMARY_NIC | head -1 | awk '{print USD4}' | cut -d'/' -f1) REGISTRY_NAME=registry.USD(hostname --long) REGISTRY_USER=dummy REGISTRY_PASSWORD=dummy KEY=USD(echo -n USDREGISTRY_USER:USDREGISTRY_PASSWORD | base64) echo "{\"auths\": {\"USDREGISTRY_NAME:5000\": {\"auth\": \"USDKEY\", \"email\": \"[email protected]\"}}}" > /root/disconnected_pull.json mv USD{PULL_SECRET} /root/openshift_pull.json.old jq ".auths += {\"USDREGISTRY_NAME:5000\": {\"auth\": \"USDKEY\",\"email\": \"[email protected]\"}}" < /root/openshift_pull.json.old > USDPULL_SECRET mkdir -p /opt/registry/{auth,certs,data,conf} cat <<EOF > /opt/registry/conf/config.yml version: 0.1 log: fields: service: registry storage: cache: blobdescriptor: inmemory filesystem: rootdirectory: /var/lib/registry delete: enabled: true http: addr: :5000 headers: X-Content-Type-Options: [nosniff] health: storagedriver: enabled: true interval: 10s threshold: 3 compatibility: schema1: enabled: true EOF openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 3650 -out /opt/registry/certs/domain.crt -subj "/C=US/ST=Madrid/L=San Bernardo/O=Karmalabs/OU=Guitar/CN=USDREGISTRY_NAME" -addext "subjectAltName=DNS:USDREGISTRY_NAME" cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust extract htpasswd -bBc /opt/registry/auth/htpasswd USDREGISTRY_USER USDREGISTRY_PASSWORD podman create --name registry --net host --security-opt label=disable --replace -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/conf/config.yml:/etc/docker/registry/config.yml -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" -e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -v /opt/registry/certs:/certs:z -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key docker.io/library/registry:latest [ "USD?" == "0" ] || !! systemctl enable --now registry 1 Replace the location of the PULL_SECRET with the appropriate location for your setup. Name the script file registry.sh and save it. When you run the script, it pulls in the following information: The registry name, based on the hypervisor hostname The necessary credentials and user access details Adjust permissions by adding the execution flag as follows: USD chmod u+x USD{HOME}/registry.sh To run the script without any parameters, enter the following command: USD USD{HOME}/registry.sh The script starts the server. The script uses a systemd service for management purposes. 
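After the script finishes, you can optionally confirm that the registry answers before you mirror any content. The following sketch assumes the default dummy credentials and the registry name that the script derives from the hypervisor hostname:

# Log in with the credentials that the script created
podman login "registry.$(hostname --long):5000" -u dummy -p dummy

# List the repositories; an empty catalog is expected before mirroring
curl -s -u dummy:dummy "https://registry.$(hostname --long):5000/v2/_catalog"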
If you need to manage the script, you can use the following commands: USD systemctl status USD systemctl start USD systemctl stop The root folder for the registry is in the /opt/registry directory and contains the following subdirectories: certs contains the TLS certificates. auth contains the credentials. data contains the registry images. conf contains the registry configuration. 6.3.7. Setting up a management cluster for hosted control planes in a disconnected environment To set up an OpenShift Container Platform management cluster, you can use dev-scripts, or if you are based on virtual machines, you can use the kcli tool. The following instructions are specific to the kcli tool. Procedure Ensure that the right networks are prepared for use in the hypervisor. The networks will host both the management and hosted clusters. Enter the following kcli command: USD kcli create network -c 192.168.126.0/24 -P dhcp=false -P dns=false \ -d 2620:52:0:1306::0/64 --domain dns.base.domain.name --nodhcp dual where: -c specifies the CIDR for the network. -P dhcp=false configures the network to disable the DHCP, which is handled by the dnsmasq that you configured. -P dns=false configures the network to disable the DNS, which is also handled by the dnsmasq that you configured. --domain sets the domain to search. dns.base.domain.name is the DNS base domain name. dual is the name of the network that you are creating. After the network is created, review the following output: [root@hypershiftbm ~]# kcli list network Listing Networks... +---------+--------+---------------------+-------+------------------+------+ | Network | Type | Cidr | Dhcp | Domain | Mode | +---------+--------+---------------------+-------+------------------+------+ | default | routed | 192.168.122.0/24 | True | default | nat | | ipv4 | routed | 2620:52:0:1306::/64 | False | dns.base.domain.name | nat | | ipv4 | routed | 192.168.125.0/24 | False | dns.base.domain.name | nat | | ipv6 | routed | 2620:52:0:1305::/64 | False | dns.base.domain.name | nat | +---------+--------+---------------------+-------+------------------+------+ [root@hypershiftbm ~]# kcli info network ipv6 Providing information about network ipv6... cidr: 2620:52:0:1306::/64 dhcp: false domain: dns.base.domain.name mode: nat plan: kvirt type: routed Ensure that the pull secret and kcli plan files are in place so that you can deploy the OpenShift Container Platform management cluster: Confirm that the pull secret is in the same folder as the kcli plan, and that the pull secret file is named openshift_pull.json . Add the kcli plan, which contains the OpenShift Container Platform definition, in the mgmt-compact-hub-dual.yaml file. 
Ensure that you update the file contents to match your environment: plan: hub-dual force: true version: stable tag: "<4.x.y>-x86_64" 1 cluster: "hub-dual" dualstack: true domain: dns.base.domain.name api_ip: 192.168.126.10 ingress_ip: 192.168.126.11 service_networks: - 172.30.0.0/16 - fd02::/112 cluster_networks: - 10.132.0.0/14 - fd01::/48 disconnected_url: registry.dns.base.domain.name:5000 disconnected_update: true disconnected_user: dummy disconnected_password: dummy disconnected_operators_version: v4.14 disconnected_operators: - name: metallb-operator - name: lvms-operator channels: - name: stable-4.14 disconnected_extra_images: - quay.io/user-name/trbsht:latest - quay.io/user-name/hypershift:BMSelfManage-v4.14-rc-v3 - registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 dualstack: true disk_size: 200 extra_disks: [200] memory: 48000 numcpus: 16 ctlplanes: 3 workers: 0 manifests: extra-manifests metal3: true network: dual users_dev: developer users_devpassword: developer users_admin: admin users_adminpassword: admin metallb_pool: dual-virtual-network metallb_ranges: - 192.168.126.150-192.168.126.190 metallb_autoassign: true apps: - users - lvms-operator - metallb-operator vmrules: - hub-bootstrap: nets: - name: ipv6 mac: aa:aa:aa:aa:10:07 - hub-ctlplane-0: nets: - name: ipv6 mac: aa:aa:aa:aa:10:01 - hub-ctlplane-1: nets: - name: ipv6 mac: aa:aa:aa:aa:10:02 - hub-ctlplane-2: nets: - name: ipv6 mac: aa:aa:aa:aa:10:03 1 Replace <4.x.y> with the supported OpenShift Container Platform version you want to use. To provision the management cluster, enter the following command: USD kcli create cluster openshift --pf mgmt-compact-hub-dual.yaml steps , configure the web server. 6.3.8. Configuring the web server for hosted control planes in a disconnected environment You need to configure an additional web server to host the Red Hat Enterprise Linux CoreOS (RHCOS) images that are associated with the OpenShift Container Platform release that you are deploying as a hosted cluster. Procedure To configure the web server, complete the following steps: Extract the openshift-install binary from the OpenShift Container Platform release that you want to use by entering the following command: USD oc adm -a USD{LOCAL_SECRET_JSON} release extract --command=openshift-install \ "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Run the following script. The script creates a folder in the /opt/srv directory. The folder contains the RHCOS images to provision the worker nodes. #!/bin/bash WEBSRV_FOLDER=/opt/srv ROOTFS_IMG_URL="USD(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.pxe.rootfs.location')" 1 LIVE_ISO_URL="USD(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location')" 2 mkdir -p USD{WEBSRV_FOLDER}/images curl -Lk USD{ROOTFS_IMG_URL} -o USD{WEBSRV_FOLDER}/images/USD{ROOTFS_IMG_URL##*/} curl -Lk USD{LIVE_ISO_URL} -o USD{WEBSRV_FOLDER}/images/USD{LIVE_ISO_URL##*/} chmod -R 755 USD{WEBSRV_FOLDER}/* ## Run Webserver podman ps --noheading | grep -q websrv-ai if [[ USD? == 0 ]];then echo "Launching Registry pod..." /usr/bin/podman run --name websrv-ai --net host -v /opt/srv:/usr/local/apache2/htdocs:z quay.io/alosadag/httpd:p8080 fi 1 You can find the ROOTFS_IMG_URL value on the OpenShift CI Release page. 2 You can find the LIVE_ISO_URL value on the OpenShift CI Release page. After the download is completed, a container runs to host the images on a web server. 
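You can optionally confirm that the RHCOS artifacts are reachable over HTTP before you reference them from other resources. This sketch assumes that the web server container publishes port 8080, which matches the image URLs used later in this document:

# List the downloaded artifacts on disk
ls -lh /opt/srv/images/

# Confirm that one of the artifacts is served over HTTP
curl -sI "http://$(hostname --long):8080/images/$(ls /opt/srv/images | head -1)" | head -1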
The container uses a variation of the official HTTPd image, which also enables it to work with IPv6 networks. 6.3.9. Configuring image mirroring for hosted control planes in a disconnected environment Image mirroring is the process of fetching images from external registries, such as registry.redhat.com or quay.io , and storing them in your private registry. In the following procedures, the oc-mirror tool is used, which is a binary that uses the ImageSetConfiguration object. In the file, you can specify the following information: The OpenShift Container Platform versions to mirror. The versions are in quay.io . The additional Operators to mirror. Select packages individually. The extra images that you want to add to the repository. Prerequisites Ensure that the registry server is running before you start the mirroring process. Procedure To configure image mirroring, complete the following steps: Ensure that your USD{HOME}/.docker/config.json file is updated with the registries that you are going to mirror from and with the private registry that you plan to push the images to. By using the following example, create an ImageSetConfiguration object to use for mirroring. Replace values as needed to match your environment: apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: channels: - name: candidate-4.18 minVersion: <4.x.y-build> 1 maxVersion: <4.x.y-build> 2 type: ocp kubeVirtContainer: true 3 graph: true additionalImages: 4 - name: quay.io/karmab/origin-keepalived-ipfailover:latest - name: quay.io/karmab/kubectl:latest - name: quay.io/karmab/haproxy:latest - name: quay.io/karmab/mdns-publisher:latest - name: quay.io/karmab/origin-coredns:latest - name: quay.io/karmab/curl:latest - name: quay.io/karmab/kcli:latest - name: quay.io/user-name/trbsht:latest - name: quay.io/user-name/hypershift:BMSelfManage-v4.18 - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: lvms-operator - name: local-storage-operator - name: odf-csi-addons-operator - name: odf-operator - name: mcg-operator - name: ocs-operator - name: metallb-operator - name: kubevirt-hyperconverged 5 1 2 Replace <4.x.y-build> with the supported OpenShift Container Platform version you want to use. 3 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only. 4 Images specified in the additionalImages field are examples only and are not strictly needed. 5 For deployments that use the KubeVirt provider, include this line. Start the mirroring process by entering the following command: USD oc-mirror --v2 --config imagesetconfig.yaml \ --workspace file://mirror-file docker://<registry> After the mirroring process is finished, you have a new folder named mirror-file , which contains the ImageDigestMirrorSet (IDMS), ImageTagMirrorSet (ITMS), and the catalog sources to apply on the hosted cluster. Mirror the nightly or CI versions of OpenShift Container Platform by configuring the imagesetconfig.yaml file as follows: apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: graph: true release: registry.ci.openshift.org/ocp/release:<4.x.y-build> 1 kubeVirtContainer: true 2 # ... 1 Replace <4.x.y-build> with the supported OpenShift Container Platform version you want to use. 
2 Set this optional flag to true if you want to also mirror the container disk image for the Red Hat Enterprise Linux CoreOS (RHCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only. If you have a partially disconnected environment, mirror the images from the image set configuration to a registry by entering the following command: USD oc mirror -c imagesetconfig.yaml \ --workspace file://<file_path> docker://<mirror_registry_url> --v2 For more information, see "Mirroring an image set in a partially disconnected environment". If you have a fully disconnected environment, perform the following steps: Mirror the images from the specified image set configuration to the disk by entering the following command: USD oc mirror -c imagesetconfig.yaml file://<file_path> --v2 For more information, see "Mirroring an image set in a fully disconnected environment". Process the image set file on the disk and mirror the contents to a target mirror registry by entering the following command: USD oc mirror -c imagesetconfig.yaml \ --from file://<file_path> docker://<mirror_registry_url> --v2 Mirror the latest multicluster engine Operator images by following the steps in Install on disconnected networks . Additional resources Mirroring an image set in a partially disconnected environment Mirroring an image set in a fully disconnected environment 6.3.10. Applying objects in the management cluster After the mirroring process is complete, you need to apply two objects in the management cluster: ImageContentSourcePolicy (ICSP) or ImageDigestMirrorSet (IDMS) Catalog sources When you use the oc-mirror tool, the output artifacts are in a folder named oc-mirror-workspace/results-XXXXXX/ . The ICSP or IDMS initiates a MachineConfig change that does not restart your nodes but restarts the kubelet on each of them. After the nodes are marked as READY , you need to apply the newly generated catalog sources. The catalog sources initiate actions in the openshift-marketplace Operator, such as downloading the catalog image and processing it to retrieve all the PackageManifests that are included in that image. Procedure To check the new sources, run the following command by using the new CatalogSource as a source: USD oc get packagemanifest To apply the artifacts, complete the following steps: Create the ICSP or IDMS artifacts by entering the following command: USD oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml Wait for the nodes to become ready, and then enter the following command: USD oc apply -f catalogSource-XXXXXXXX-index.yaml Mirror the OLM catalogs and configure the hosted cluster to point to the mirror. When you use the management (default) OLMCatalogPlacement mode, the image stream that is used for OLM catalogs is not automatically amended with override information from the ICSP on the management cluster. If the OLM catalogs are properly mirrored to an internal registry by using the original name and tag, add the hypershift.openshift.io/olm-catalogs-is-registry-overrides annotation to the HostedCluster resource. The format is "sr1=dr1,sr2=dr2" , where the source registry string is a key and the destination registry is a value. 
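For example, a minimal sketch of adding that annotation with the CLI might look like the following; the hosted cluster name, the clusters namespace, and the registry host names are placeholders that you must replace with your own values:

# Map the source registry to the internal mirror for the OLM catalog image streams
oc annotate hostedcluster <hosted_cluster_name> -n clusters \
  hypershift.openshift.io/olm-catalogs-is-registry-overrides="registry.redhat.io=registry.<dns.base.domain.name>:5000"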
To bypass the OLM catalog image stream mechanism, use the following four annotations on the HostedCluster resource to directly specify the addresses of the four images to use for OLM Operator catalogs: hypershift.openshift.io/certified-operators-catalog-image hypershift.openshift.io/community-operators-catalog-image hypershift.openshift.io/redhat-marketplace-catalog-image hypershift.openshift.io/redhat-operators-catalog-image In this case, the image stream is not created, and you must update the value of the annotations when the internal mirror is refreshed to pull in Operator updates. steps Deploy the multicluster engine Operator by completing the steps in Deploying multicluster engine Operator for a disconnected installation of hosted control planes . Additional resources Mirroring images for a disconnected installation by using the oc-mirror plugin v2 . 6.3.11. Deploying multicluster engine Operator for a disconnected installation of hosted control planes The multicluster engine for Kubernetes Operator plays a crucial role in deploying clusters across providers. If you do not have multicluster engine Operator installed, review the following documentation to understand the prerequisites and steps to install it: About cluster lifecycle with multicluster engine operator Installing and upgrading multicluster engine operator 6.3.11.1. Deploying AgentServiceConfig resources The AgentServiceConfig custom resource is an essential component of the Assisted Service add-on that is part of multicluster engine Operator. It is responsible for bare metal cluster deployment. When the add-on is enabled, you deploy the AgentServiceConfig resource to configure the add-on. In addition to configuring the AgentServiceConfig resource, you need to include additional config maps to ensure that multicluster engine Operator functions properly in a disconnected environment. Procedure Configure the custom registries by adding the following config map, which contains the disconnected details to customize the deployment: apiVersion: v1 kind: ConfigMap metadata: name: custom-registries namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registries.conf: | unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "registry.redhat.io/openshift4" mirror-by-digest-only = true [[registry.mirror]] location = "registry.dns.base.domain.name:5000/openshift4" 1 [[registry]] prefix = "" location = "registry.redhat.io/rhacm2" mirror-by-digest-only = true # ... # ... 1 Replace dns.base.domain.name with the DNS base domain name. The object contains two fields: Custom CAs: This field contains the Certificate Authorities (CAs) that are loaded into the various processes of the deployment. Registries: The Registries.conf field contains information about images and namespaces that need to be consumed from a mirror registry rather than the original source registry. 
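Rather than writing the YAML by hand, you can also build this config map from files on disk. The following is a minimal sketch; the registries.conf path is an assumption, the CA path matches the registry script earlier in this document, and the resulting object must keep the ca-bundle.crt and registries.conf keys and the app: assisted-service label shown in the example:

# Create the config map from local copies of the registry CA and registries.conf
oc create configmap custom-registries -n multicluster-engine \
  --from-file=ca-bundle.crt=/opt/registry/certs/domain.crt \
  --from-file=registries.conf=/root/registries.conf

# Add the label that is used in the example above
oc label configmap custom-registries -n multicluster-engine app=assisted-service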
Configure the Assisted Service by adding the AssistedServiceConfig object, as shown in the following example: apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: annotations: unsupported.agent-install.openshift.io/assisted-service-configmap: assisted-service-config 1 name: agent namespace: multicluster-engine spec: mirrorRegistryRef: name: custom-registries 2 databaseStorage: storageClassName: lvms-vg1 accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: lvms-vg1 accessModes: - ReadWriteOnce resources: requests: storage: 20Gi osImages: 3 - cpuArchitecture: x86_64 4 openshiftVersion: "4.14" rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live-rootfs.x86_64.img 5 url: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live.x86_64.iso version: 414.92.202308281054-0 - cpuArchitecture: x86_64 openshiftVersion: "4.15" rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live-rootfs.x86_64.img url: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live.x86_64.iso version: 415.92.202403270524-0 1 The metadata.annotations["unsupported.agent-install.openshift.io/assisted-service-configmap"] annotation references the config map name that the Operator consumes to customize behavior. 2 The spec.mirrorRegistryRef.name annotation points to the config map that contains disconnected registry information that the Assisted Service Operator consumes. This config map adds those resources during the deployment process. 3 The spec.osImages field contains different versions available for deployment by this Operator. This field is mandatory. This example assumes that you already downloaded the RootFS and LiveISO files. 4 Add a cpuArchitecture subsection for every OpenShift Container Platform release that you want to deploy. In this example, cpuArchitecture subsections are included for 4.14 and 4.15. 5 In the rootFSUrl and url fields, replace dns.base.domain.name with the DNS base domain name. Deploy all of the objects by concatenating them into a single file and applying them to the management cluster. To do so, enter the following command: USD oc apply -f agentServiceConfig.yaml The command triggers two pods. Example output assisted-image-service-0 1/1 Running 2 11d 1 assisted-service-668b49548-9m7xw 2/2 Running 5 11d 2 1 The assisted-image-service pod is responsible for creating the Red Hat Enterprise Linux CoreOS (RHCOS) boot image template, which is customized for each cluster that you deploy. 2 The assisted-service refers to the Operator. steps Configure TLS certificates by completing the steps in Configuring TLS certificates for a disconnected installation of hosted control planes . 6.3.12. Configuring TLS certificates for a disconnected installation of hosted control planes To ensure proper function in a disconnected deployment, you need to configure the registry CA certificates in the management cluster and the worker nodes for the hosted cluster. 6.3.12.1. Adding the registry CA to the management cluster To add the registry CA to the management cluster, complete the following steps. 
Procedure Create a config map that resembles the following example: apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <config_map_namespace> 2 data: 3 <registry_name>..<port>: | 4 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- 1 Specify the name of the config map. 2 Specify the namespace for the config map. 3 In the data field, specify the registry names and the registry certificate content. Replace <port> with the port where the registry server is running; for example, 5000 . 4 Ensure that the data in the config map is defined by using | only instead of other methods, such as | - . If you use other methods, issues can occur when the pod reads the certificates. Patch the cluster-wide object, image.config.openshift.io to include the following specification: spec: additionalTrustedCA: - name: registry-config As a result of this patch, the control plane nodes can retrieve images from the private registry and the HyperShift Operator can extract the OpenShift Container Platform payload for hosted cluster deployments. The process to patch the object might take several minutes to be completed. 6.3.12.2. Adding the registry CA to the worker nodes for the hosted cluster In order for the data plane workers in the hosted cluster to be able to retrieve images from the private registry, you need to add the registry CA to the worker nodes. Procedure In the hc.spec.additionalTrustBundle file, add the following specification: spec: additionalTrustBundle: - name: user-ca-bundle 1 1 The user-ca-bundle entry is a config map that you create in the step. In the same namespace where the HostedCluster object is created, create the user-ca-bundle config map. The config map resembles the following example: apiVersion: v1 data: ca-bundle.crt: | // Registry1 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry2 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry3 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1 1 Specify the namespace where the HostedCluster object is created. 6.3.13. Creating a hosted cluster on bare metal A hosted cluster is an OpenShift Container Platform cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane. 6.3.13.1. Deploying hosted cluster objects Typically, the HyperShift Operator creates the HostedControlPlane namespace. However, in this case, you want to include all the objects before the HyperShift Operator begins to reconcile the HostedCluster object. Then, when the Operator starts the reconciliation process, it can find all of the objects in place. Procedure Create a YAML file with the following information about the namespaces: --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: <hosted_cluster_namespace>-<hosted_cluster_name> 1 spec: {} status: {} --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: <hosted_cluster_namespace> 2 spec: {} status: {} 1 Replace <hosted_cluster_name> with your hosted cluster. 2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 
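Because you apply all of these files together later, it can help to validate each one as you write it. A minimal sketch, assuming you saved the previous content in a hypothetical file named 00-namespaces.yaml:

# Client-side validation only; nothing is created yet
oc apply --dry-run=client -f 00-namespaces.yaml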
Create a YAML file with the following information about the config maps and secrets to include in the HostedCluster deployment: --- apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1 --- apiVersion: v1 data: .dockerconfigjson: xxxxxxxxx kind: Secret metadata: creationTimestamp: null name: <hosted_cluster_name>-pull-secret 2 namespace: <hosted_cluster_namespace> 3 --- apiVersion: v1 kind: Secret metadata: name: sshkey-cluster-<hosted_cluster_name> 4 namespace: <hosted_cluster_namespace> 5 stringData: id_rsa.pub: ssh-rsa xxxxxxxxx --- apiVersion: v1 data: key: nTPtVBEt03owkrKhIdmSW8jrWRxU57KO/fnZa8oaG0Y= kind: Secret metadata: creationTimestamp: null name: <hosted_cluster_name>-etcd-encryption-key 6 namespace: <hosted_cluster_namespace> 7 type: Opaque 1 3 5 7 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 2 4 6 Replace <hosted_cluster_name> with your hosted cluster. Create a YAML file that contains the RBAC roles so that Assisted Service agents can be in the same HostedControlPlane namespace as the hosted control plane and still be managed by the cluster API: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: capi-provider-role namespace: <hosted_cluster_namespace>-<hosted_cluster_name> 1 2 rules: - apiGroups: - agent-install.openshift.io resources: - agents verbs: - '*' 1 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 2 Replace <hosted_cluster_name> with your hosted cluster. Create a YAML file with information about the HostedCluster object, replacing values as necessary: apiVersion: hypershift.openshift.io/v1beta1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: additionalTrustBundle: name: "user-ca-bundle" olmCatalogPlacement: guest imageContentSources: 3 - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: - registry.<dns.base.domain.name>:5000/openshift/release 4 - source: quay.io/openshift-release-dev/ocp-release mirrors: - registry.<dns.base.domain.name>:5000/openshift/release-images 5 - mirrors: ... ... autoscaling: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: <dns.base.domain.name> 6 etcd: managed: storage: persistentVolume: size: 8Gi restoreSnapshotURL: null type: PersistentVolume managementType: Managed fips: false networking: clusterNetwork: - cidr: 10.132.0.0/14 - cidr: fd01::/48 networkType: OVNKubernetes serviceNetwork: - cidr: 172.31.0.0/16 - cidr: fd02::/112 platform: agent: agentNamespace: <hosted_cluster_namespace>-<hosted_cluster_name> 7 8 type: Agent pullSecret: name: <hosted_cluster_name>-pull-secret 9 release: image: registry.<dns.base.domain.name>:5000/openshift/release-images:<4.x.y>-x86_64 10 11 secretEncryption: aescbc: activeKey: name: <hosted_cluster_name>-etcd-encryption-key 12 type: aescbc services: - service: APIServer servicePublishingStrategy: type: LoadBalancer - service: OAuthServer servicePublishingStrategy: type: Route - service: OIDC servicePublishingStrategy: type: Route - service: Konnectivity servicePublishingStrategy: type: Route - service: Ignition servicePublishingStrategy: type: Route sshKey: name: sshkey-cluster-<hosted_cluster_name> 13 status: controlPlaneEndpoint: host: "" port: 0 1 7 9 12 13 Replace <hosted_cluster_name> with your hosted cluster. 
2 8 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 3 The imageContentSources section contains mirror references for user workloads within the hosted cluster. 4 5 6 10 Replace <dns.base.domain.name> with the DNS base domain name. 11 Replace <4.x.y> with the supported OpenShift Container Platform version you want to use. Add an annotation in the HostedCluster object that points to the HyperShift Operator release in the OpenShift Container Platform release: Obtain the image payload by entering the following command: USD oc adm release info \ registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-release:<4.x.y>-x86_64 \ | grep hypershift where <dns.base.domain.name> is the DNS base domain name and <4.x.y> is the supported OpenShift Container Platform version you want to use. Example output hypershift sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8 By using the OpenShift Container Platform Images namespace, check the digest by entering the following command: podman pull registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-v4.0-art-dev@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8 where <dns.base.domain.name> is the DNS base domain name. Example output podman pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8 Trying to pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8... Getting image source signatures Copying blob d8190195889e skipped: already exists Copying blob c71d2589fba7 skipped: already exists Copying blob d4dc6e74b6ce skipped: already exists Copying blob 97da74cc6d8f skipped: already exists Copying blob b70007a560c9 done Copying config 3a62961e6e done Writing manifest to image destination Storing signatures 3a62961e6ed6edab46d5ec8429ff1f41d6bb68de51271f037c6cb8941a007fde The release image that is set in the HostedCluster object must use the digest rather than the tag; for example, quay.io/openshift-release-dev/ocp-release@sha256:e3ba11bd1e5e8ea5a0b36a75791c90f29afb0fdbe4125be4e48f69c76a5c47a0 . Create all of the objects that you defined in the YAML files by concatenating them into a file and applying them against the management cluster. 
To do so, enter the following command: USD oc apply -f 01-4.14-hosted_cluster-nodeport.yaml Example output NAME READY STATUS RESTARTS AGE capi-provider-5b57dbd6d5-pxlqc 1/1 Running 0 3m57s catalog-operator-9694884dd-m7zzv 2/2 Running 0 93s cluster-api-f98b9467c-9hfrq 1/1 Running 0 3m57s cluster-autoscaler-d7f95dd5-d8m5d 1/1 Running 0 93s cluster-image-registry-operator-5ff5944b4b-648ht 1/2 Running 0 93s cluster-network-operator-77b896ddc-wpkq8 1/1 Running 0 94s cluster-node-tuning-operator-84956cd484-4hfgf 1/1 Running 0 94s cluster-policy-controller-5fd8595d97-rhbwf 1/1 Running 0 95s cluster-storage-operator-54dcf584b5-xrnts 1/1 Running 0 93s cluster-version-operator-9c554b999-l22s7 1/1 Running 0 95s control-plane-operator-6fdc9c569-t7hr4 1/1 Running 0 3m57s csi-snapshot-controller-785c6dc77c-8ljmr 1/1 Running 0 77s csi-snapshot-controller-operator-7c6674bc5b-d9dtp 1/1 Running 0 93s csi-snapshot-webhook-5b8584875f-2492j 1/1 Running 0 77s dns-operator-6874b577f-9tc6b 1/1 Running 0 94s etcd-0 3/3 Running 0 3m39s hosted-cluster-config-operator-f5cf5c464-4nmbh 1/1 Running 0 93s ignition-server-6b689748fc-zdqzk 1/1 Running 0 95s ignition-server-proxy-54d4bb9b9b-6zkg7 1/1 Running 0 95s ingress-operator-6548dc758b-f9gtg 1/2 Running 0 94s konnectivity-agent-7767cdc6f5-tw782 1/1 Running 0 95s kube-apiserver-7b5799b6c8-9f5bp 4/4 Running 0 3m7s kube-controller-manager-5465bc4dd6-zpdlk 1/1 Running 0 44s kube-scheduler-5dd5f78b94-bbbck 1/1 Running 0 2m36s machine-approver-846c69f56-jxvfr 1/1 Running 0 92s oauth-openshift-79c7bf44bf-j975g 2/2 Running 0 62s olm-operator-767f9584c-4lcl2 2/2 Running 0 93s openshift-apiserver-5d469778c6-pl8tj 3/3 Running 0 2m36s openshift-controller-manager-6475fdff58-hl4f7 1/1 Running 0 95s openshift-oauth-apiserver-dbbc5cc5f-98574 2/2 Running 0 95s openshift-route-controller-manager-5f6997b48f-s9vdc 1/1 Running 0 95s packageserver-67c87d4d4f-kl7qh 2/2 Running 0 93s When the hosted cluster is available, the output looks like the following example. Example output NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters hosted-dual hosted-admin-kubeconfig Partial True False The hosted control plane is available 6.3.13.2. Creating a NodePool object for the hosted cluster A NodePool is a scalable set of worker nodes that is associated with a hosted cluster. NodePool machine architectures remain consistent within a specific pool and are independent of the machine architecture of the control plane. Procedure Create a YAML file with the following information about the NodePool object, replacing values as necessary: apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool metadata: creationTimestamp: null name: <hosted_cluster_name> \ 1 namespace: <hosted_cluster_namespace> \ 2 spec: arch: amd64 clusterName: <hosted_cluster_name> management: autoRepair: false \ 3 upgradeType: InPlace \ 4 nodeDrainTimeout: 0s platform: type: Agent release: image: registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 \ 5 replicas: 2 6 status: replicas: 2 1 Replace <hosted_cluster_name> with your hosted cluster. 2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 3 The autoRepair field is set to false because the node will not be re-created if it is removed. 4 The upgradeType is set to InPlace , which indicates that the same bare metal node is reused during an upgrade. 5 All of the nodes included in this NodePool are based on the following OpenShift Container Platform version: 4.x.y-x86_64 . 
Replace the <dns.base.domain.name> value with your DNS base domain name and the 4.x.y value with the supported OpenShift Container Platform version you want to use. 6 You can set the replicas value to 2 to create two node pool replicas in your hosted cluster. Create the NodePool object by entering the following command: USD oc apply -f 02-nodepool.yaml Example output NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE clusters hosted-dual hosted 0 False False 4.x.y-x86_64 6.3.13.3. Creating an InfraEnv resource for the hosted cluster The InfraEnv resource is an Assisted Service object that includes essential details, such as the pullSecretRef and the sshAuthorizedKey . Those details are used to create the Red Hat Enterprise Linux CoreOS (RHCOS) boot image that is customized for the hosted cluster. You can host more than one InfraEnv resource, and each one can adopt certain types of hosts. For example, you might want to divide your server farm between a host that has greater RAM capacity. Procedure Create a YAML file with the following information about the InfraEnv resource, replacing values as necessary: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <hosted_cluster_name> namespace: <hosted-cluster-namespace>-<hosted_cluster_name> 1 2 spec: pullSecretRef: 3 name: pull-secret sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDk7ICaUE+/k4zTpxLk4+xFdHi4ZuDi5qjeF52afsNkw0w/glILHhwpL5gnp5WkRuL8GwJuZ1VqLC9EKrdmegn4MrmUlq7WTsP0VFOZFBfq2XRUxo1wrRdor2z0Bbh93ytR+ZsDbbLlGngXaMa0Vbt+z74FqlcajbHTZ6zBmTpBVq5RHtDPgKITdpE1fongp7+ZXQNBlkaavaqv8bnyrP4BWahLP4iO9/xJF9lQYboYwEEDzmnKLMW1VtCE6nJzEgWCufACTbxpNS7GvKtoHT/OVzw8ArEXhZXQUS1UY8zKsX2iXwmyhw5Sj6YboA8WICs4z+TrFP89LmxXY0j6536TQFyRz1iB4WWvCbH5n6W+ABV2e8ssJB1AmEy8QYNwpJQJNpSxzoKBjI73XxvPYYC/IjPFMySwZqrSZCkJYqQ023ySkaQxWZT7in4KeMu7eS2tC+Kn4deJ7KwwUycx8n6RHMeD8Qg9flTHCv3gmab8JKZJqN3hW1D378JuvmIX4V0= 4 1 Replace <hosted_cluster_name> with your hosted cluster. 2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 3 The pullSecretRef refers to the config map reference in the same namespace as the InfraEnv , where the pull secret is used. 4 The sshAuthorizedKey represents the SSH public key that is placed in the boot image. The SSH key allows access to the worker nodes as the core user. Create the InfraEnv resource by entering the following command: USD oc apply -f 03-infraenv.yaml Example output 6.3.13.4. Creating worker nodes for the hosted cluster If you are working on a bare metal platform, creating worker nodes is crucial to ensure that the details in the BareMetalHost are correctly configured. If you are working with virtual machines, you can complete the following steps to create empty worker nodes for the Metal3 Operator to consume. To do so, you use the kcli tool. Procedure If this is not your first attempt to create worker nodes, you must first delete your setup. To do so, delete the plan by entering the following command: USD kcli delete plan <hosted_cluster_name> 1 1 Replace <hosted_cluster_name> with the name of your hosted cluster. When you are prompted to confirm whether you want to delete the plan, type y . Confirm that you see a message stating that the plan was deleted. 
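Before you recreate the virtual machines, you can optionally confirm that no virtual machines from the previous plan remain. A minimal check, assuming the kcli defaults that are used elsewhere in this document:

# Neither list should show entries for the deleted plan
kcli list plan
kcli list vm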
Create the virtual machines by entering the following commands: Enter the following command to create the first virtual machine: USD kcli create vm \ -P start=False \ 1 -P uefi_legacy=true \ 2 -P plan=<hosted_cluster_name> \ 3 -P memory=8192 -P numcpus=16 \ 4 -P disks=[200,200] \ 5 -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:01\"}"] \ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1101 \ -P name=<hosted_cluster_name>-worker0 7 1 Include start=False if you do not want the virtual machine (VM) to automatically start upon creation. 2 Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with UEFI implementations. 3 Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster. 4 Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU. 5 Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM. 6 Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:02:13"}] to provide network details, including the network name to connect to, the type of network ( ipv4 , ipv6 , or dual ), and the MAC address of the primary interface. 7 Replace <hosted_cluster_name> with the name of your hosted cluster. Enter the following command to create the second virtual machine: USD kcli create vm \ -P start=False \ 1 -P uefi_legacy=true \ 2 -P plan=<hosted_cluster_name> \ 3 -P memory=8192 -P numcpus=16 \ 4 -P disks=[200,200] \ 5 -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:02\"}"] \ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1102 -P name=<hosted_cluster_name>-worker1 7 1 Include start=False if you do not want the virtual machine (VM) to automatically start upon creation. 2 Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with UEFI implementations. 3 Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster. 4 Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU. 5 Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM. 6 Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:02:13"}] to provide network details, including the network name to connect to, the type of network ( ipv4 , ipv6 , or dual ), and the MAC address of the primary interface. 7 Replace <hosted_cluster_name> with the name of your hosted cluster. Enter the following command to create the third virtual machine: USD kcli create vm \ -P start=False \ 1 -P uefi_legacy=true \ 2 -P plan=<hosted_cluster_name> \ 3 -P memory=8192 -P numcpus=16 \ 4 -P disks=[200,200] \ 5 -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:03\"}"] \ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1103 -P name=<hosted_cluster_name>-worker2 7 1 Include start=False if you do not want the virtual machine (VM) to automatically start upon creation. 2 Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with UEFI implementations. 3 Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster. 
4 Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU. 5 Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM. 6 Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:02:13"}] to provide network details, including the network name to connect to, the type of network ( ipv4 , ipv6 , or dual ), and the MAC address of the primary interface. 7 Replace <hosted_cluster_name> with the name of your hosted cluster. Enter the restart ksushy command to restart the ksushy tool to ensure that the tool detects the VMs that you added: USD systemctl restart ksushy Example output +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+ | Name | Status | Ip | Source | Plan | Profile | +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+ | hosted-worker0 | down | | | hosted-dual | kvirt | | hosted-worker1 | down | | | hosted-dual | kvirt | | hosted-worker2 | down | | | hosted-dual | kvirt | +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+ 6.3.13.5. Creating bare metal hosts for the hosted cluster A bare metal host is an openshift-machine-api object that encompasses physical and logical details so that it can be identified by a Metal3 Operator. Those details are associated with other Assisted Service objects, known as agents . Prerequisites Before you create the bare metal host and destination nodes, you must have the destination machines ready. Procedure To create a bare metal host, complete the following steps: Create a YAML file with the following information: Because you have at least one secret that holds the bare metal host credentials, you need to create at least two objects for each worker node. apiVersion: v1 kind: Secret metadata: name: <hosted_cluster_name>-worker0-bmc-secret \ 1 namespace: <hosted_cluster_namespace>-<hosted_cluster_name> \ 2 data: password: YWRtaW4= \ 3 username: YWRtaW4= \ 4 type: Opaque # ... apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <hosted_cluster_name>-worker0 namespace: <hosted_cluster_namespace>-<hosted_cluster_name> \ 5 labels: infraenvs.agent-install.openshift.io: <hosted_cluster_name> \ 6 annotations: inspect.metal3.io: disabled bmac.agent-install.openshift.io/hostname: <hosted_cluster_name>-worker0 \ 7 spec: automatedCleaningMode: disabled \ 8 bmc: disableCertificateVerification: true \ 9 address: redfish-virtualmedia://[192.168.126.1]:9000/redfish/v1/Systems/local/<hosted_cluster_name>-worker0 \ 10 credentialsName: <hosted_cluster_name>-worker0-bmc-secret \ 11 bootMACAddress: aa:aa:aa:aa:02:11 \ 12 online: true 13 1 Replace <hosted_cluster_name> with your hosted cluster. 2 5 Replace <hosted_cluster_name> with your hosted cluster. Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 3 Specify the password of the baseboard management controller (BMC) in Base64 format. 4 Specify the user name of the BMC in Base64 format. 6 Replace <hosted_cluster_name> with your hosted cluster. The infraenvs.agent-install.openshift.io field serves as the link between the Assisted Installer and the BareMetalHost objects. 7 Replace <hosted_cluster_name> with your hosted cluster. The bmac.agent-install.openshift.io/hostname field represents the node name that is adopted during deployment. 
8 The automatedCleaningMode field prevents the node from being erased by the Metal3 Operator. 9 The disableCertificateVerification field is set to true to bypass certificate validation from the client. 10 Replace <hosted_cluster_name> with your hosted cluster. The address field denotes the BMC address of the worker node. 11 Replace <hosted_cluster_name> with your hosted cluster. The credentialsName field points to the secret where the user and password credentials are stored. 12 The bootMACAddress field indicates the interface MAC address that the node starts from. 13 The online field defines the state of the node after the BareMetalHost object is created. Deploy the BareMetalHost object by entering the following command: USD oc apply -f 04-bmh.yaml During the process, you can view the following output: This output indicates that the process is trying to reach the nodes: Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 registering true 2s clusters-hosted hosted-worker1 registering true 2s clusters-hosted hosted-worker2 registering true 2s This output indicates that the nodes are starting: Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 provisioning true 16s clusters-hosted hosted-worker1 provisioning true 16s clusters-hosted hosted-worker2 provisioning true 16s This output indicates that the nodes started successfully: Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 provisioned true 67s clusters-hosted hosted-worker1 provisioned true 67s clusters-hosted hosted-worker2 provisioned true 67s After the nodes start, notice the agents in the namespace, as shown in this example: Example output NAMESPACE NAME CLUSTER APPROVED ROLE STAGE clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411 true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412 true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413 true auto-assign The agents represent nodes that are available for installation. To assign the nodes to a hosted cluster, scale up the node pool. 6.3.13.6. Scaling up the node pool After you create the bare metal hosts, their statuses change from Registering to Provisioning to Provisioned . The nodes start with the LiveISO of the agent and a default pod that is named agent . That agent is responsible for receiving instructions from the Assisted Service Operator to install the OpenShift Container Platform payload. Procedure To scale up the node pool, enter the following command: USD oc -n <hosted_cluster_namespace> scale nodepool <hosted_cluster_name> \ --replicas 3 where: <hosted_cluster_namespace> is the name of the hosted cluster namespace. <hosted_cluster_name> is the name of the hosted cluster. 
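While the node pool scales, you can follow the progress instead of checking only after it completes. The following is a convenience sketch that reuses the oc and watch commands that appear elsewhere in this guide; replace the placeholders with the same namespace and cluster name that you used in the scale command:

watch "oc -n <hosted_cluster_namespace>-<hosted_cluster_name> get agent; echo; oc -n <hosted_cluster_namespace> get nodepool <hosted_cluster_name>"

Press Ctrl+C to stop watching when the agents show the hosted cluster name in the CLUSTER column.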
After the scaling process is complete, notice that the agents are assigned to a hosted cluster: Example output NAMESPACE NAME CLUSTER APPROVED ROLE STAGE clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411 hosted true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412 hosted true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413 hosted true auto-assign Also notice that the node pool replicas are set: Example output NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE clusters hosted hosted 3 False False <4.x.y>-x86_64 Minimum availability requires 3 replicas, current 0 available Replace <4.x.y> with the supported OpenShift Container Platform version that you want to use. Wait until the nodes join the cluster. During the process, the agents provide updates on their stage and status. 6.4. Deploying hosted control planes on IBM Z in a disconnected environment Hosted control planes deployments in disconnected environments function differently than in a standalone OpenShift Container Platform. Hosted control planes involves two distinct environments: Control plane: Located in the management cluster, where the hosted control planes pods are run and managed by the Control Plane Operator. Data plane: Located in the workers of the hosted cluster, where the workload and a few other pods run, managed by the Hosted Cluster Config Operator. The ImageContentSourcePolicy (ICSP) custom resource for the data plane is managed through the ImageContentSources API in the hosted cluster manifest. For the control plane, ICSP objects are managed in the management cluster. These objects are parsed by the HyperShift Operator and are shared as registry-overrides entries with the Control Plane Operator. These entries are injected into any one of the available deployments in the hosted control planes namespace as an argument. To work with disconnected registries in the hosted control planes, you must first create the appropriate ICSP in the management cluster. Then, to deploy disconnected workloads in the data plane, you need to add the entries that you want into the ImageContentSources field in the hosted cluster manifest. 6.4.1. Prerequisites to deploy hosted control planes on IBM Z in a disconnected environment A mirror registry. For more information, see "Creating a mirror registry with mirror registry for Red Hat OpenShift". A mirrored image for a disconnected installation. For more information, see "Mirroring images for a disconnected installation using the oc-mirror plugin". Additional resources Creating a mirror registry with mirror registry for Red Hat OpenShift Mirroring images for a disconnected installation by using the oc-mirror plugin v2 6.4.2. Adding credentials and the registry certificate authority to the management cluster To pull the mirror registry images from the management cluster, you must first add credentials and the certificate authority of the mirror registry to the management cluster. 
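The <credentials> value that you add to the pull secret later in this procedure is the Base64 encoding of the mirror registry user name and password joined by a colon, which is the standard dockerconfigjson format. As a quick sketch, assuming your own registry account values, you can generate it on any host that has the GNU coreutils base64 command:

echo -n '<registry_user>:<registry_password>' | base64 -w0

Keep the output of that command so that you can paste it into the auth field when you edit the pull secret.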
Use the following procedure: Procedure Create a ConfigMap with the certificate of the mirror registry by running the following command: USD oc apply -f registry-config.yaml Example registry-config.yaml file apiVersion: v1 kind: ConfigMap metadata: name: registry-config namespace: openshift-config data: <mirror_registry>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- Patch the image.config.openshift.io cluster-wide object to include the following entries: spec: additionalTrustedCA: - name: registry-config Update the management cluster pull secret to add the credentials of the mirror registry. Fetch the pull secret from the cluster in a JSON format by running the following command: USD oc get secret/pull-secret -n openshift-config -o json \ | jq -r '.data.".dockerconfigjson"' \ | base64 -d > authfile Edit the fetched secret JSON file to include a section with the credentials of the certificate authority: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 Provide the name of the mirror registry. 2 Provide the credentials for the mirror registry to allow fetch of images. Update the pull secret on the cluster by running the following command: USD oc set data secret/pull-secret -n openshift-config \ --from-file=.dockerconfigjson=authfile 6.4.3. Update the registry certificate authority in the AgentServiceConfig resource with the mirror registry When you use a mirror registry for images, agents need to trust the registry's certificate to securely pull images. You can add the certificate authority of the mirror registry to the AgentServiceConfig custom resource by creating a ConfigMap . Prerequisites You must have installed multicluster engine for Kubernetes Operator. Procedure In the same namespace where you installed multicluster engine Operator, create a ConfigMap resource with the mirror registry details. This ConfigMap resource ensures that you grant the hosted cluster workers the capability to retrieve images from the mirror registry. Example ConfigMap file apiVersion: v1 kind: ConfigMap metadata: name: mirror-config namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registries.conf: | [[registry]] location = "registry.stage.redhat.io" insecure = false blocked = false mirror-by-digest-only = true prefix = "" [[registry.mirror]] location = "<mirror_registry>" insecure = false [[registry]] location = "registry.redhat.io/multicluster-engine" insecure = false blocked = false mirror-by-digest-only = true prefix = "" [[registry.mirror]] location = "<mirror_registry>/multicluster-engine" 1 insecure = false 1 Where: <mirror_registry> is the name of the mirror registry. Patch the AgentServiceConfig resource to include the ConfigMap resource that you created. If the AgentServiceConfig resource is not present, create the AgentServiceConfig resource with the following content embedded into it: spec: mirrorRegistryRef: name: mirror-config 6.4.4. Adding the registry certificate authority to the hosted cluster When you are deploying hosted control planes on IBM Z in a disconnected environment, include the additional-trust-bundle and image-content-sources resources. Those resources allow the hosted cluster to inject the certificate authority into the data plane workers so that the images are pulled from the registry. Create the icsp.yaml file with the image-content-sources information. 
The image-content-sources information is available in the ImageContentSourcePolicy YAML file that is generated after you mirror the images by using oc-mirror . Example ImageContentSourcePolicy file # cat icsp.yaml - mirrors: - <mirror_registry>/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - <mirror_registry>/openshift/release-images source: quay.io/openshift-release-dev/ocp-release Create a hosted cluster and provide the additional-trust-bundle certificate to update the compute nodes with the certificates as in the following example: USD hcp create cluster agent \ --name=<hosted_cluster_name> \ 1 --pull-secret=<path_to_pull_secret> \ 2 --agent-namespace=<hosted_control_plane_namespace> \ 3 --base-domain=<basedomain> \ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> \ --etcd-storage-class=<etcd_storage_class> \ 5 --ssh-key <path_to_ssh_public_key> \ 6 --namespace <hosted_cluster_namespace> \ 7 --control-plane-availability-policy SingleReplica \ --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \ 8 --additional-trust-bundle <path for cert> \ 9 --image-content-sources icsp.yaml 1 Replace <hosted_cluster_name> with the name of your hosted cluster. 2 Replace <path_to_pull_secret> with the path to your pull secret, for example, /user/name/pullsecret . 3 Replace <hosted_control_plane_namespace> with the name of the hosted control plane namespace, for example, clusters-hosted . 4 Replace <basedomain> with your base domain, for example, example.com . 5 Replace <etcd_storage_class> with the etcd storage class name, for example, lvm-storageclass . 6 Replace <path_to_ssh_public_key> with the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub . 7 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 8 Replace <ocp_release_image> with the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi . 9 Replace <path for cert> with the path to the certificate authority of the mirror registry. 6.5. Monitoring user workload in a disconnected environment The hypershift-addon managed cluster add-on enables the --enable-uwm-telemetry-remote-write option in the HyperShift Operator. By enabling that option, you ensure that user workload monitoring is enabled and that it can remotely write telemetry metrics from control planes. 6.5.1. Resolving user workload monitoring issues If you installed multicluster engine Operator on OpenShift Container Platform clusters that are not connected to the internet, when you try to run the user workload monitoring feature of the HyperShift Operator by entering the following command, the feature fails with an error: USD oc get events -n hypershift Example error LAST SEEN TYPE REASON OBJECT MESSAGE 4m46s Warning ReconcileError deployment/operator Failed to ensure UWM telemetry remote write: cannot get telemeter client secret: Secret "telemeter-client" not found To resolve the error, you must disable the user workload monitoring option by creating a config map in the local-cluster namespace. You can create the config map either before or after you enable the add-on. The add-on agent reconfigures the HyperShift Operator. Procedure Create the following config map: kind: ConfigMap apiVersion: v1 metadata: name: hypershift-operator-install-flags namespace: local-cluster data: installFlagsToAdd: "" installFlagsToRemove: "--enable-uwm-telemetry-remote-write" Apply the config map by running the following command: USD oc apply -f <filename>.yaml 6.5.2. Verifying the status of the hosted control plane feature The hosted control plane feature is enabled by default.
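Before you change anything, you can check whether an override is already set for the hypershift component. The following is a sketch that assumes the oc CLI and reads the same field that the patch command in the procedure modifies; an empty result typically means that no override is set and the default, enabled, state applies:

oc get mce <multiclusterengine> -o jsonpath='{.spec.overrides.components[?(@.name=="hypershift")].enabled}'

Replace <multiclusterengine> with the name of your multicluster engine Operator instance, as in the procedure that follows.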
Procedure If the feature is disabled and you want to enable it, enter the following command. Replace <multiclusterengine> with the name of your multicluster engine Operator instance: USD oc patch mce <multiclusterengine> --type=merge -p \ '{"spec":{"overrides":{"components":[{"name":"hypershift","enabled": true}]}}}' When you enable the feature, the hypershift-addon managed cluster add-on is installed in the local-cluster managed cluster, and the add-on agent installs the HyperShift Operator on the multicluster engine Operator hub cluster. Confirm that the hypershift-addon managed cluster add-on is installed by entering the following command: USD oc get managedclusteraddons -n local-cluster hypershift-addon Example output NAME AVAILABLE DEGRADED PROGRESSING hypershift-addon True False To avoid a timeout during this process, enter the following commands: USD oc wait --for=condition=Degraded=True managedclusteraddons/hypershift-addon \ -n local-cluster --timeout=5m USD oc wait --for=condition=Available=True managedclusteraddons/hypershift-addon \ -n local-cluster --timeout=5m When the process is complete, the hypershift-addon managed cluster add-on and the HyperShift Operator are installed, and the local-cluster managed cluster is available to host and manage hosted clusters. 6.5.3. Configuring the hypershift-addon managed cluster add-on to run on an infrastructure node By default, no node placement preference is specified for the hypershift-addon managed cluster add-on. Consider running the add-on on infrastructure nodes: doing so prevents the add-on from incurring billing costs against subscription counts and keeps its maintenance and management tasks separate from your application workloads. Procedure Log in to the hub cluster. Open the hypershift-addon-deploy-config add-on deployment configuration specification for editing by entering the following command: USD oc edit addondeploymentconfig hypershift-addon-deploy-config \ -n multicluster-engine Add the nodePlacement field to the specification, as shown in the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: hypershift-addon-deploy-config namespace: multicluster-engine spec: nodePlacement: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra operator: Exists Save the changes. The hypershift-addon managed cluster add-on is deployed on an infrastructure node for new and existing managed clusters.
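To confirm the placement after you save the changes, you can list the add-on agent pods together with the nodes that they run on and compare the node names against your infrastructure nodes. This is only a verification sketch; it searches all namespaces for pods that belong to the hypershift-addon add-on rather than assuming a specific add-on namespace:

oc get pods -A -o wide | grep hypershift-addon
oc get nodes -l node-role.kubernetes.io/infra

The second command lists the nodes that carry the infrastructure role label that the nodeSelector in the example uses.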
[ "apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: channels: - name: candidate-4.18 minVersion: <4.x.y-build> 1 maxVersion: <4.x.y-build> 2 type: ocp kubeVirtContainer: true 3 graph: true additionalImages: 4 - name: quay.io/karmab/origin-keepalived-ipfailover:latest - name: quay.io/karmab/kubectl:latest - name: quay.io/karmab/haproxy:latest - name: quay.io/karmab/mdns-publisher:latest - name: quay.io/karmab/origin-coredns:latest - name: quay.io/karmab/curl:latest - name: quay.io/karmab/kcli:latest - name: quay.io/user-name/trbsht:latest - name: quay.io/user-name/hypershift:BMSelfManage-v4.18 - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: lvms-operator - name: local-storage-operator - name: odf-csi-addons-operator - name: odf-operator - name: mcg-operator - name: ocs-operator - name: metallb-operator - name: kubevirt-hyperconverged 5", "oc-mirror --v2 --config imagesetconfig.yaml --workspace file://mirror-file docker://<registry>", "apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: graph: true release: registry.ci.openshift.org/ocp/release:<4.x.y-build> 1 kubeVirtContainer: true 2", "oc mirror -c imagesetconfig.yaml --workspace file://<file_path> docker://<mirror_registry_url> --v2", "oc mirror -c imagesetconfig.yaml file://<file_path> --v2", "oc mirror -c imagesetconfig.yaml --from file://<file_path> docker://<mirror_registry_url> --v2", "oc get packagemanifest", "oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml", "oc apply -f catalogSource-XXXXXXXX-index.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <config_map_namespace> 2 data: 3 <registry_name>..<port>: | 4 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "spec: additionalTrustedCA: - name: registry-config", "spec: additionalTrustBundle: - name: user-ca-bundle 1", "apiVersion: v1 data: ca-bundle.crt: | // Registry1 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry2 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry3 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1", "hcp create cluster kubevirt --name <hosted_cluster_name> \\ 1 --node-pool-replicas <node_pool_replica_count> \\ 2 --pull-secret <path_to_pull_secret> \\ 3 --memory <value_for_memory> \\ 4 --cores <value_for_cpu> \\ 5 --etcd-storage-class=<etcd_storage_class> 6", "oc -n clusters-<hosted-cluster-name> get pods", "NAME READY STATUS RESTARTS AGE capi-provider-5cc7b74f47-n5gkr 1/1 Running 0 3m catalog-operator-5f799567b7-fd6jw 2/2 Running 0 69s certified-operators-catalog-784b9899f9-mrp6p 1/1 Running 0 66s cluster-api-6bbc867966-l4dwl 1/1 Running 0 66s . . . 
redhat-operators-catalog-9d5fd4d44-z8qqk 1/1 Running 0 66s", "oc get --namespace clusters hostedclusters", "NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available", "*.apps.mgmt-cluster.example.com", "*.apps.guest.apps.mgmt-cluster.example.com", "oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ \"op\": \"add\", \"path\": \"/spec/routeAdmission\", \"value\": {wildcardPolicy: \"WildcardsAllowed\"}}]'", "hcp create cluster kubevirt --name <hosted_cluster_name> \\ 1 --node-pool-replicas <worker_count> \\ 2 --pull-secret <path_to_pull_secret> \\ 3 --memory <value_for_memory> \\ 4 --cores <value_for_cpu> \\ 5 --base-domain <basedomain> 6", "oc get --namespace clusters hostedclusters", "NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example example-admin-kubeconfig Partial True False The hosted control plane is available", "hcp create kubeconfig --name <hosted_cluster_name> > <hosted_cluster_name>-kubeconfig", "oc --kubeconfig <hosted_cluster_name>-kubeconfig get co", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE console <4.x.0> False False False 30m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get \"https://console-openshift-console.apps.example.hypershift.lab\": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host ingress <4.x.0> True False True 28m The \"default\" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)", "oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name==\"http\")].nodePort}'", "oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}'", "apply -f - apiVersion: v1 kind: Service metadata: labels: app: <hosted_cluster_name> name: <hosted_cluster_name>-apps namespace: clusters-<hosted_cluster_name> spec: ports: - name: https-443 port: 443 protocol: TCP targetPort: <https_node_port> 1 - name: http-80 port: 80 protocol: TCP targetPort: <http-node-port> 2 selector: kubevirt.io: virt-launcher type: LoadBalancer", "oc -n clusters-<hosted_cluster_name> get service <hosted-cluster-name>-apps -o jsonpath='{.status.loadBalancer.ingress[0].ip}'", "192.168.20.30", "*.apps.<hosted_cluster_name\\>.<base_domain\\>.", "dig +short test.apps.example.hypershift.lab 192.168.20.30", "oc get --namespace clusters hostedclusters", "NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE example <4.x.0> example-admin-kubeconfig Completed True False The hosted control plane is available", "export KUBECONFIG=/root/.kcli/clusters/hub-ipv4/auth/kubeconfig", "watch \"oc get pod -n hypershift;echo;echo; oc get pod -n clusters-hosted-ipv4;echo;echo; oc get bmh -A;echo;echo; oc get agent -A;echo;echo; oc get infraenv -A;echo;echo; oc get hostedcluster -A;echo;echo; oc get nodepool -A;echo;echo;\"", "oc get secret -n clusters-hosted-ipv4 admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > /root/hc_admin_kubeconfig.yaml", "export KUBECONFIG=/root/hc_admin_kubeconfig.yaml", "watch \"oc get 
clusterversion,nodes,co\"", "apiVersion: v1 kind: ConfigMap metadata: name: custom-registries namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- # -----END CERTIFICATE----- registries.conf: | unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"registry.ocp-edge-cluster-0.qe.lab.redhat.com:5000/openshift4\" [[registry]] prefix = \"\" location = \"registry.redhat.io/rhacm2\" mirror-by-digest-only = true", "oc adm release info <tagged_openshift_release_image> | grep \"Pull From\"", "Pull From: quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe", "sudo dnf install dnsmasq radvd vim golang podman bind-utils net-tools httpd-tools tree htop strace tmux -y", "systemctl enable --now podman", "sudo yum -y install libvirt libvirt-daemon-driver-qemu qemu-kvm", "sudo usermod -aG qemu,libvirt USD(id -un)", "sudo newgrp libvirt", "sudo systemctl enable --now libvirtd", "sudo dnf -y copr enable karmab/kcli", "sudo dnf -y install kcli", "sudo kcli create pool -p /var/lib/libvirt/images default", "kcli create host kvm -H 127.0.0.1 local", "sudo setfacl -m u:USD(id -un):rwx /var/lib/libvirt/images", "kcli create network -c 192.168.122.0/24 default", "#!/bin/bash export IP=\"192.168.126.1\" 1 export BASE_RESOLV_CONF=\"/run/NetworkManager/resolv.conf\" if ! [[ `grep -q \"USDIP\" /etc/resolv.conf` ]]; then export TMP_FILE=USD(mktemp /etc/forcedns_resolv.conf.XXXXXX) cp USDBASE_RESOLV_CONF USDTMP_FILE chmod --reference=USDBASE_RESOLV_CONF USDTMP_FILE sed -i -e \"s/dns.base.domain.name//\" -e \"s/search /& dns.base.domain.name /\" -e \"0,/nameserver/s/nameserver/& USDIP\\n&/\" USDTMP_FILE 2 mv USDTMP_FILE /etc/resolv.conf fi echo \"ok\"", "chmod 755 /etc/NetworkManager/dispatcher.d/forcedns", "sudo dnf install python3-pyOpenSSL.noarch python3-cherrypy -y", "kcli create sushy-service --ssl --ipv6 --port 9000", "sudo systemctl daemon-reload", "systemctl enable --now ksushy", "systemctl status ksushy", "sed -i s/^SELINUX=.*USD/SELINUX=permissive/ /etc/selinux/config; setenforce 0", "systemctl disable --now firewalld", "systemctl restart libvirtd", "systemctl enable --now libvirtd", "api.example.krnl.es. IN A 192.168.122.20 api.example.krnl.es. IN A 192.168.122.21 api.example.krnl.es. IN A 192.168.122.22 api-int.example.krnl.es. IN A 192.168.122.20 api-int.example.krnl.es. IN A 192.168.122.21 api-int.example.krnl.es. IN A 192.168.122.22 `*`.apps.example.krnl.es. IN A 192.168.122.23", "api.example.krnl.es. IN A 2620:52:0:1306::5 api.example.krnl.es. IN A 2620:52:0:1306::6 api.example.krnl.es. IN A 2620:52:0:1306::7 api-int.example.krnl.es. IN A 2620:52:0:1306::5 api-int.example.krnl.es. IN A 2620:52:0:1306::6 api-int.example.krnl.es. IN A 2620:52:0:1306::7 `*`.apps.example.krnl.es. 
IN A 2620:52:0:1306::10", "host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10 host-record=api.hub-dual.dns.base.domain.name,192.168.126.10 address=/apps.hub-dual.dns.base.domain.name/192.168.126.11 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20 dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21 dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22 dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25 dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26 host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2 host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2 address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5] dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6] dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7] dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8] dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9]", "#!/usr/bin/env bash set -euo pipefail PRIMARY_NIC=USD(ls -1 /sys/class/net | grep -v podman | head -1) export PATH=/root/bin:USDPATH export PULL_SECRET=\"/root/baremetal/hub/openshift_pull.json\" 1 if [[ ! -f USDPULL_SECRET ]];then echo \"Pull Secret not found, exiting...\" exit 1 fi dnf -y install podman httpd httpd-tools jq skopeo libseccomp-devel export IP=USD(ip -o addr show USDPRIMARY_NIC | head -1 | awk '{print USD4}' | cut -d'/' -f1) REGISTRY_NAME=registry.USD(hostname --long) REGISTRY_USER=dummy REGISTRY_PASSWORD=dummy KEY=USD(echo -n USDREGISTRY_USER:USDREGISTRY_PASSWORD | base64) echo \"{\\\"auths\\\": {\\\"USDREGISTRY_NAME:5000\\\": {\\\"auth\\\": \\\"USDKEY\\\", \\\"email\\\": \\\"[email protected]\\\"}}}\" > /root/disconnected_pull.json mv USD{PULL_SECRET} /root/openshift_pull.json.old jq \".auths += {\\\"USDREGISTRY_NAME:5000\\\": {\\\"auth\\\": \\\"USDKEY\\\",\\\"email\\\": \\\"[email protected]\\\"}}\" < /root/openshift_pull.json.old > USDPULL_SECRET mkdir -p /opt/registry/{auth,certs,data,conf} cat <<EOF > /opt/registry/conf/config.yml version: 0.1 log: fields: service: registry storage: cache: blobdescriptor: inmemory filesystem: rootdirectory: /var/lib/registry delete: enabled: true http: addr: :5000 headers: X-Content-Type-Options: [nosniff] health: storagedriver: enabled: true interval: 10s threshold: 3 compatibility: schema1: enabled: true EOF openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 3650 -out /opt/registry/certs/domain.crt -subj \"/C=US/ST=Madrid/L=San Bernardo/O=Karmalabs/OU=Guitar/CN=USDREGISTRY_NAME\" -addext \"subjectAltName=DNS:USDREGISTRY_NAME\" cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust extract htpasswd -bBc /opt/registry/auth/htpasswd USDREGISTRY_USER USDREGISTRY_PASSWORD create --name registry --net host --security-opt label=disable --replace -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/conf/config.yml:/etc/docker/registry/config.yml -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry\" -e \"REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -v /opt/registry/certs:/certs:z -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key docker.io/library/registry:latest [ \"USD?\" == \"0\" ] || !! 
systemctl enable --now registry", "chmod u+x USD{HOME}/registry.sh", "USD{HOME}/registry.sh", "systemctl status", "systemctl start", "systemctl stop", "kcli create network -c 192.168.126.0/24 -P dhcp=false -P dns=false -d 2620:52:0:1306::0/64 --domain dns.base.domain.name --nodhcp dual", "kcli list network Listing Networks +---------+--------+---------------------+-------+------------------+------+ | Network | Type | Cidr | Dhcp | Domain | Mode | +---------+--------+---------------------+-------+------------------+------+ | default | routed | 192.168.122.0/24 | True | default | nat | | ipv4 | routed | 2620:52:0:1306::/64 | False | dns.base.domain.name | nat | | ipv4 | routed | 192.168.125.0/24 | False | dns.base.domain.name | nat | | ipv6 | routed | 2620:52:0:1305::/64 | False | dns.base.domain.name | nat | +---------+--------+---------------------+-------+------------------+------+", "kcli info network ipv6 Providing information about network ipv6 cidr: 2620:52:0:1306::/64 dhcp: false domain: dns.base.domain.name mode: nat plan: kvirt type: routed", "plan: hub-dual force: true version: stable tag: \"<4.x.y>-x86_64\" 1 cluster: \"hub-dual\" dualstack: true domain: dns.base.domain.name api_ip: 192.168.126.10 ingress_ip: 192.168.126.11 service_networks: - 172.30.0.0/16 - fd02::/112 cluster_networks: - 10.132.0.0/14 - fd01::/48 disconnected_url: registry.dns.base.domain.name:5000 disconnected_update: true disconnected_user: dummy disconnected_password: dummy disconnected_operators_version: v4.14 disconnected_operators: - name: metallb-operator - name: lvms-operator channels: - name: stable-4.14 disconnected_extra_images: - quay.io/user-name/trbsht:latest - quay.io/user-name/hypershift:BMSelfManage-v4.14-rc-v3 - registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 dualstack: true disk_size: 200 extra_disks: [200] memory: 48000 numcpus: 16 ctlplanes: 3 workers: 0 manifests: extra-manifests metal3: true network: dual users_dev: developer users_devpassword: developer users_admin: admin users_adminpassword: admin metallb_pool: dual-virtual-network metallb_ranges: - 192.168.126.150-192.168.126.190 metallb_autoassign: true apps: - users - lvms-operator - metallb-operator vmrules: - hub-bootstrap: nets: - name: ipv6 mac: aa:aa:aa:aa:10:07 - hub-ctlplane-0: nets: - name: ipv6 mac: aa:aa:aa:aa:10:01 - hub-ctlplane-1: nets: - name: ipv6 mac: aa:aa:aa:aa:10:02 - hub-ctlplane-2: nets: - name: ipv6 mac: aa:aa:aa:aa:10:03", "kcli create cluster openshift --pf mgmt-compact-hub-dual.yaml", "oc adm -a USD{LOCAL_SECRET_JSON} release extract --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "#!/bin/bash WEBSRV_FOLDER=/opt/srv ROOTFS_IMG_URL=\"USD(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.pxe.rootfs.location')\" 1 LIVE_ISO_URL=\"USD(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location')\" 2 mkdir -p USD{WEBSRV_FOLDER}/images curl -Lk USD{ROOTFS_IMG_URL} -o USD{WEBSRV_FOLDER}/images/USD{ROOTFS_IMG_URL##*/} curl -Lk USD{LIVE_ISO_URL} -o USD{WEBSRV_FOLDER}/images/USD{LIVE_ISO_URL##*/} chmod -R 755 USD{WEBSRV_FOLDER}/* ## Run Webserver ps --noheading | grep -q websrv-ai if [[ USD? 
== 0 ]];then echo \"Launching Registry pod...\" /usr/bin/podman run --name websrv-ai --net host -v /opt/srv:/usr/local/apache2/htdocs:z quay.io/alosadag/httpd:p8080 fi", "apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: channels: - name: candidate-4.18 minVersion: <4.x.y-build> 1 maxVersion: <4.x.y-build> 2 type: ocp kubeVirtContainer: true 3 graph: true additionalImages: 4 - name: quay.io/karmab/origin-keepalived-ipfailover:latest - name: quay.io/karmab/kubectl:latest - name: quay.io/karmab/haproxy:latest - name: quay.io/karmab/mdns-publisher:latest - name: quay.io/karmab/origin-coredns:latest - name: quay.io/karmab/curl:latest - name: quay.io/karmab/kcli:latest - name: quay.io/user-name/trbsht:latest - name: quay.io/user-name/hypershift:BMSelfManage-v4.18 - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: lvms-operator - name: local-storage-operator - name: odf-csi-addons-operator - name: odf-operator - name: mcg-operator - name: ocs-operator - name: metallb-operator - name: kubevirt-hyperconverged 5", "oc-mirror --v2 --config imagesetconfig.yaml --workspace file://mirror-file docker://<registry>", "apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: graph: true release: registry.ci.openshift.org/ocp/release:<4.x.y-build> 1 kubeVirtContainer: true 2", "oc mirror -c imagesetconfig.yaml --workspace file://<file_path> docker://<mirror_registry_url> --v2", "oc mirror -c imagesetconfig.yaml file://<file_path> --v2", "oc mirror -c imagesetconfig.yaml --from file://<file_path> docker://<mirror_registry_url> --v2", "oc get packagemanifest", "oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml", "oc apply -f catalogSource-XXXXXXXX-index.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: custom-registries namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registries.conf: | unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"registry.dns.base.domain.name:5000/openshift4\" 1 [[registry]] prefix = \"\" location = \"registry.redhat.io/rhacm2\" mirror-by-digest-only = true # #", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: annotations: unsupported.agent-install.openshift.io/assisted-service-configmap: assisted-service-config 1 name: agent namespace: multicluster-engine spec: mirrorRegistryRef: name: custom-registries 2 databaseStorage: storageClassName: lvms-vg1 accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: lvms-vg1 accessModes: - ReadWriteOnce resources: requests: storage: 20Gi osImages: 3 - cpuArchitecture: x86_64 4 openshiftVersion: \"4.14\" rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live-rootfs.x86_64.img 5 url: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live.x86_64.iso version: 414.92.202308281054-0 - cpuArchitecture: x86_64 openshiftVersion: \"4.15\" rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live-rootfs.x86_64.img url: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live.x86_64.iso version: 
415.92.202403270524-0", "oc apply -f agentServiceConfig.yaml", "assisted-image-service-0 1/1 Running 2 11d 1 assisted-service-668b49548-9m7xw 2/2 Running 5 11d 2", "apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <config_map_namespace> 2 data: 3 <registry_name>..<port>: | 4 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- <registry_name>..<port>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "spec: additionalTrustedCA: - name: registry-config", "spec: additionalTrustBundle: - name: user-ca-bundle 1", "apiVersion: v1 data: ca-bundle.crt: | // Registry1 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry2 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- // Registry3 CA -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1", "--- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: <hosted_cluster_namespace>-<hosted_cluster_name> 1 spec: {} status: {} --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: <hosted_cluster_namespace> 2 spec: {} status: {}", "--- apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: <hosted_cluster_namespace> 1 --- apiVersion: v1 data: .dockerconfigjson: xxxxxxxxx kind: Secret metadata: creationTimestamp: null name: <hosted_cluster_name>-pull-secret 2 namespace: <hosted_cluster_namespace> 3 --- apiVersion: v1 kind: Secret metadata: name: sshkey-cluster-<hosted_cluster_name> 4 namespace: <hosted_cluster_namespace> 5 stringData: id_rsa.pub: ssh-rsa xxxxxxxxx --- apiVersion: v1 data: key: nTPtVBEt03owkrKhIdmSW8jrWRxU57KO/fnZa8oaG0Y= kind: Secret metadata: creationTimestamp: null name: <hosted_cluster_name>-etcd-encryption-key 6 namespace: <hosted_cluster_namespace> 7 type: Opaque", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: capi-provider-role namespace: <hosted_cluster_namespace>-<hosted_cluster_name> 1 2 rules: - apiGroups: - agent-install.openshift.io resources: - agents verbs: - '*'", "apiVersion: hypershift.openshift.io/v1beta1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: additionalTrustBundle: name: \"user-ca-bundle\" olmCatalogPlacement: guest imageContentSources: 3 - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: - registry.<dns.base.domain.name>:5000/openshift/release 4 - source: quay.io/openshift-release-dev/ocp-release mirrors: - registry.<dns.base.domain.name>:5000/openshift/release-images 5 - mirrors: autoscaling: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: <dns.base.domain.name> 6 etcd: managed: storage: persistentVolume: size: 8Gi restoreSnapshotURL: null type: PersistentVolume managementType: Managed fips: false networking: clusterNetwork: - cidr: 10.132.0.0/14 - cidr: fd01::/48 networkType: OVNKubernetes serviceNetwork: - cidr: 172.31.0.0/16 - cidr: fd02::/112 platform: agent: agentNamespace: <hosted_cluster_namespace>-<hosted_cluster_name> 7 8 type: Agent pullSecret: name: <hosted_cluster_name>-pull-secret 9 release: image: registry.<dns.base.domain.name>:5000/openshift/release-images:<4.x.y>-x86_64 10 11 secretEncryption: aescbc: activeKey: name: <hosted_cluster_name>-etcd-encryption-key 12 type: aescbc services: - service: APIServer 
servicePublishingStrategy: type: LoadBalancer - service: OAuthServer servicePublishingStrategy: type: Route - service: OIDC servicePublishingStrategy: type: Route - service: Konnectivity servicePublishingStrategy: type: Route - service: Ignition servicePublishingStrategy: type: Route sshKey: name: sshkey-cluster-<hosted_cluster_name> 13 status: controlPlaneEndpoint: host: \"\" port: 0", "oc adm release info registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-release:<4.x.y>-x86_64 | grep hypershift", "hypershift sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8", "pull registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-v4.0-art-dev@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8", "pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8 Trying to pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8 Getting image source signatures Copying blob d8190195889e skipped: already exists Copying blob c71d2589fba7 skipped: already exists Copying blob d4dc6e74b6ce skipped: already exists Copying blob 97da74cc6d8f skipped: already exists Copying blob b70007a560c9 done Copying config 3a62961e6e done Writing manifest to image destination Storing signatures 3a62961e6ed6edab46d5ec8429ff1f41d6bb68de51271f037c6cb8941a007fde", "oc apply -f 01-4.14-hosted_cluster-nodeport.yaml", "NAME READY STATUS RESTARTS AGE capi-provider-5b57dbd6d5-pxlqc 1/1 Running 0 3m57s catalog-operator-9694884dd-m7zzv 2/2 Running 0 93s cluster-api-f98b9467c-9hfrq 1/1 Running 0 3m57s cluster-autoscaler-d7f95dd5-d8m5d 1/1 Running 0 93s cluster-image-registry-operator-5ff5944b4b-648ht 1/2 Running 0 93s cluster-network-operator-77b896ddc-wpkq8 1/1 Running 0 94s cluster-node-tuning-operator-84956cd484-4hfgf 1/1 Running 0 94s cluster-policy-controller-5fd8595d97-rhbwf 1/1 Running 0 95s cluster-storage-operator-54dcf584b5-xrnts 1/1 Running 0 93s cluster-version-operator-9c554b999-l22s7 1/1 Running 0 95s control-plane-operator-6fdc9c569-t7hr4 1/1 Running 0 3m57s csi-snapshot-controller-785c6dc77c-8ljmr 1/1 Running 0 77s csi-snapshot-controller-operator-7c6674bc5b-d9dtp 1/1 Running 0 93s csi-snapshot-webhook-5b8584875f-2492j 1/1 Running 0 77s dns-operator-6874b577f-9tc6b 1/1 Running 0 94s etcd-0 3/3 Running 0 3m39s hosted-cluster-config-operator-f5cf5c464-4nmbh 1/1 Running 0 93s ignition-server-6b689748fc-zdqzk 1/1 Running 0 95s ignition-server-proxy-54d4bb9b9b-6zkg7 1/1 Running 0 95s ingress-operator-6548dc758b-f9gtg 1/2 Running 0 94s konnectivity-agent-7767cdc6f5-tw782 1/1 Running 0 95s kube-apiserver-7b5799b6c8-9f5bp 4/4 Running 0 3m7s kube-controller-manager-5465bc4dd6-zpdlk 1/1 Running 0 44s kube-scheduler-5dd5f78b94-bbbck 1/1 Running 0 2m36s machine-approver-846c69f56-jxvfr 1/1 Running 0 92s oauth-openshift-79c7bf44bf-j975g 2/2 Running 0 62s olm-operator-767f9584c-4lcl2 2/2 Running 0 93s openshift-apiserver-5d469778c6-pl8tj 3/3 Running 0 2m36s openshift-controller-manager-6475fdff58-hl4f7 1/1 Running 0 95s openshift-oauth-apiserver-dbbc5cc5f-98574 2/2 Running 0 95s openshift-route-controller-manager-5f6997b48f-s9vdc 1/1 Running 0 95s packageserver-67c87d4d4f-kl7qh 2/2 Running 0 93s", "NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE clusters hosted-dual hosted-admin-kubeconfig Partial True False The hosted control plane is available", "apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool 
metadata: creationTimestamp: null name: <hosted_cluster_name> \\ 1 namespace: <hosted_cluster_namespace> \\ 2 spec: arch: amd64 clusterName: <hosted_cluster_name> management: autoRepair: false \\ 3 upgradeType: InPlace \\ 4 nodeDrainTimeout: 0s platform: type: Agent release: image: registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 \\ 5 replicas: 2 6 status: replicas: 2", "oc apply -f 02-nodepool.yaml", "NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE clusters hosted-dual hosted 0 False False 4.x.y-x86_64", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <hosted_cluster_name> namespace: <hosted-cluster-namespace>-<hosted_cluster_name> 1 2 spec: pullSecretRef: 3 name: pull-secret sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDk7ICaUE+/k4zTpxLk4+xFdHi4ZuDi5qjeF52afsNkw0w/glILHhwpL5gnp5WkRuL8GwJuZ1VqLC9EKrdmegn4MrmUlq7WTsP0VFOZFBfq2XRUxo1wrRdor2z0Bbh93ytR+ZsDbbLlGngXaMa0Vbt+z74FqlcajbHTZ6zBmTpBVq5RHtDPgKITdpE1fongp7+ZXQNBlkaavaqv8bnyrP4BWahLP4iO9/xJF9lQYboYwEEDzmnKLMW1VtCE6nJzEgWCufACTbxpNS7GvKtoHT/OVzw8ArEXhZXQUS1UY8zKsX2iXwmyhw5Sj6YboA8WICs4z+TrFP89LmxXY0j6536TQFyRz1iB4WWvCbH5n6W+ABV2e8ssJB1AmEy8QYNwpJQJNpSxzoKBjI73XxvPYYC/IjPFMySwZqrSZCkJYqQ023ySkaQxWZT7in4KeMu7eS2tC+Kn4deJ7KwwUycx8n6RHMeD8Qg9flTHCv3gmab8JKZJqN3hW1D378JuvmIX4V0= 4", "oc apply -f 03-infraenv.yaml", "NAMESPACE NAME ISO CREATED AT clusters-hosted-dual hosted 2023-09-11T15:14:10Z", "kcli delete plan <hosted_cluster_name> 1", "kcli create vm -P start=False \\ 1 -P uefi_legacy=true \\ 2 -P plan=<hosted_cluster_name> \\ 3 -P memory=8192 -P numcpus=16 \\ 4 -P disks=[200,200] \\ 5 -P nets=[\"{\\\"name\\\": \\\"<network>\\\", \\\"mac\\\": \\\"aa:aa:aa:aa:11:01\\\"}\"] \\ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1101 -P name=<hosted_cluster_name>-worker0 7", "kcli create vm -P start=False \\ 1 -P uefi_legacy=true \\ 2 -P plan=<hosted_cluster_name> \\ 3 -P memory=8192 -P numcpus=16 \\ 4 -P disks=[200,200] \\ 5 -P nets=[\"{\\\"name\\\": \\\"<network>\\\", \\\"mac\\\": \\\"aa:aa:aa:aa:11:02\\\"}\"] \\ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1102 -P name=<hosted_cluster_name>-worker1 7", "kcli create vm -P start=False \\ 1 -P uefi_legacy=true \\ 2 -P plan=<hosted_cluster_name> \\ 3 -P memory=8192 -P numcpus=16 \\ 4 -P disks=[200,200] \\ 5 -P nets=[\"{\\\"name\\\": \\\"<network>\\\", \\\"mac\\\": \\\"aa:aa:aa:aa:11:03\\\"}\"] \\ 6 -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1103 -P name=<hosted_cluster_name>-worker2 7", "systemctl restart ksushy", "+---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+ | Name | Status | Ip | Source | Plan | Profile | +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+ | hosted-worker0 | down | | | hosted-dual | kvirt | | hosted-worker1 | down | | | hosted-dual | kvirt | | hosted-worker2 | down | | | hosted-dual | kvirt | +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+", "apiVersion: v1 kind: Secret metadata: name: <hosted_cluster_name>-worker0-bmc-secret \\ 1 namespace: <hosted_cluster_namespace>-<hosted_cluster_name> \\ 2 data: password: YWRtaW4= \\ 3 username: YWRtaW4= \\ 4 type: Opaque apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <hosted_cluster_name>-worker0 namespace: 
<hosted_cluster_namespace>-<hosted_cluster_name> \\ 5 labels: infraenvs.agent-install.openshift.io: <hosted_cluster_name> \\ 6 annotations: inspect.metal3.io: disabled bmac.agent-install.openshift.io/hostname: <hosted_cluster_name>-worker0 \\ 7 spec: automatedCleaningMode: disabled \\ 8 bmc: disableCertificateVerification: true \\ 9 address: redfish-virtualmedia://[192.168.126.1]:9000/redfish/v1/Systems/local/<hosted_cluster_name>-worker0 \\ 10 credentialsName: <hosted_cluster_name>-worker0-bmc-secret \\ 11 bootMACAddress: aa:aa:aa:aa:02:11 \\ 12 online: true 13", "oc apply -f 04-bmh.yaml", "NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 registering true 2s clusters-hosted hosted-worker1 registering true 2s clusters-hosted hosted-worker2 registering true 2s", "NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 provisioning true 16s clusters-hosted hosted-worker1 provisioning true 16s clusters-hosted hosted-worker2 provisioning true 16s", "NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE clusters-hosted hosted-worker0 provisioned true 67s clusters-hosted hosted-worker1 provisioned true 67s clusters-hosted hosted-worker2 provisioned true 67s", "NAMESPACE NAME CLUSTER APPROVED ROLE STAGE clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411 true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412 true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413 true auto-assign", "oc -n <hosted_cluster_namespace> scale nodepool <hosted_cluster_name> --replicas 3", "NAMESPACE NAME CLUSTER APPROVED ROLE STAGE clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411 hosted true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412 hosted true auto-assign clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413 hosted true auto-assign", "NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE clusters hosted hosted 3 False False <4.x.y>-x86_64 Minimum availability requires 3 replicas, current 0 available", "oc apply -f registry-config.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: registry-config namespace: openshift-config data: <mirror_registry>: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "spec: additionalTrustedCA: - name: registry-config", "oc get secret/pull-secret -n openshift-config -o json | jq -r '.data.\".dockerconfigjson\"' | base64 -d > authfile", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=authfile", "apiVersion: v1 kind: ConfigMap metadata: name: mirror-config namespace: multicluster-engine labels: app: assisted-service data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registries.conf: | [[registry]] location = \"registry.stage.redhat.io\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"<mirror_registry>\" insecure = false [[registry]] location = \"registry.redhat.io/multicluster-engine\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"<mirror_registry>/multicluster-engine\" 1 insecure = false", "spec: mirrorRegistryRef: name: mirror-config", "cat icsp.yaml - mirrors: - <mirror_registry>/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - <mirror_registry>/openshift/release-images source: 
quay.io/openshift-release-dev/ocp-release", "hcp create cluster agent --name=<hosted_cluster_name> \\ 1 --pull-secret=<path_to_pull_secret> \\ 2 --agent-namespace=<hosted_control_plane_namespace> \\ 3 --base-domain=<basedomain> \\ 4 --api-server-address=api.<hosted_cluster_name>.<basedomain> --etcd-storage-class=<etcd_storage_class> \\ 5 --ssh-key <path_to_ssh_public_key> \\ 6 --namespace <hosted_cluster_namespace> \\ 7 --control-plane-availability-policy SingleReplica --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> \\ 8 --additional-trust-bundle <path for cert> \\ 9 --image-content-sources icsp.yaml", "oc get events -n hypershift", "LAST SEEN TYPE REASON OBJECT MESSAGE 4m46s Warning ReconcileError deployment/operator Failed to ensure UWM telemetry remote write: cannot get telemeter client secret: Secret \"telemeter-client\" not found", "kind: ConfigMap apiVersion: v1 metadata: name: hypershift-operator-install-flags namespace: local-cluster data: installFlagsToAdd: \"\" installFlagsToRemove: \"--enable-uwm-telemetry-remote-write\"", "oc apply -f <filename>.yaml", "oc patch mce <multiclusterengine> --type=merge -p '{\"spec\":{\"overrides\":{\"components\":[{\"name\":\"hypershift\",\"enabled\": true}]}}}'", "oc get managedclusteraddons -n local-cluster hypershift-addon", "NAME AVAILABLE DEGRADED PROGRESSING hypershift-addon True False", "oc wait --for=condition=Degraded=True managedclusteraddons/hypershift-addon -n local-cluster --timeout=5m", "oc wait --for=condition=Available=True managedclusteraddons/hypershift-addon -n local-cluster --timeout=5m", "oc edit addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: hypershift-addon-deploy-config namespace: multicluster-engine spec: nodePlacement: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra operator: Exists" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/deploying-hosted-control-planes-in-a-disconnected-environment
Cluster administration
Cluster administration OpenShift Dedicated 4 Configuring OpenShift Dedicated clusters Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/cluster_administration/index
Chapter 2. Installing the Virtualization Packages
Chapter 2. Installing the Virtualization Packages To use virtualization, Red Hat virtualization packages must be installed on your computer. Virtualization packages can be installed when installing Red Hat Enterprise Linux or after installation using the yum command and the Subscription Manager application. The KVM hypervisor uses the default Red Hat Enterprise Linux kernel with the kvm kernel module. 2.1. Installing Virtualization Packages During a Red Hat Enterprise Linux Installation This section provides information about installing virtualization packages while installing Red Hat Enterprise Linux. Note For detailed information about installing Red Hat Enterprise Linux, see the Red Hat Enterprise Linux 7 Installation Guide . Important The Anaconda interface only offers the option to install Red Hat virtualization packages during the installation of Red Hat Enterprise Linux Server. When installing a Red Hat Enterprise Linux Workstation, the Red Hat virtualization packages can only be installed after the workstation installation is complete. See Section 2.2, "Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System" Procedure 2.1. Installing virtualization packages Select software Follow the installation procedure until the Installation Summary screen. Figure 2.1. The Installation Summary screen In the Installation Summary screen, click Software Selection . The Software Selection screen opens. Select the server type and package groups You can install Red Hat Enterprise Linux 7 with only the basic virtualization packages or with packages that allow management of guests through a graphical user interface. Do one of the following: Install a minimal virtualization host Select the Virtualization Host radio button in the Base Environment pane and the Virtualization Platform check box in the Add-Ons for Selected Environment pane. This installs a basic virtualization environment which can be run with virsh or remotely over the network. Figure 2.2. Virtualization Host selected in the Software Selection screen Install a virtualization host with a graphical user interface Select the Server with GUI radio button in the Base Environment pane and the Virtualization Client , Virtualization Hypervisor , and Virtualization Tools check boxes in the Add-Ons for Selected Environment pane. This installs a virtualization environment along with graphical tools for installing and managing guest virtual machines. Figure 2.3. Server with GUI selected in the software selection screen Finalize installation Click Done and continue with the installation. Important You need a valid Red Hat Enterprise Linux subscription to receive updates for the virtualization packages. 2.1.1. Installing KVM Packages with Kickstart Files To use a Kickstart file to install Red Hat Enterprise Linux with the virtualization packages, append the following package groups in the %packages section of your Kickstart file: For more information about installing with Kickstart files, see the Red Hat Enterprise Linux 7 Installation Guide .
[ "@virtualization-hypervisor @virtualization-client @virtualization-platform @virtualization-tools" ]
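For illustration, a minimal %packages section wrapping the groups listed above could look like the following sketch; a complete Kickstart file would also contain other sections (installation source, partitioning, and so on) that are omitted here:

%packages
@virtualization-hypervisor
@virtualization-client
@virtualization-platform
@virtualization-tools
%end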
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-Installing_the_virtualization_packages
Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster
Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on AWS cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 3.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 3.2. Scaling out storage capacity on a AWS cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. 
Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 3.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in the multiple of three, each of them in different failure domains. While we recommend adding nodes in the multiple of three, you still get the flexibility of adding one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during OpenShift Data Foundation deployment. 3.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.1.2. Adding a node to an user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. 
Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up a cluster .
[ "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
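As a convenience, the per-pod node lookup shown above can also be run for all OSDs at once. This sketch assumes the app=rook-ceph-osd label that OpenShift Data Foundation normally applies to OSD pods:

# List every OSD pod in openshift-storage together with the node it runs on
oc get pods -n openshift-storage -l app=rook-ceph-osd \
    -o custom-columns=POD:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase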
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/scaling_storage/scaling_storage_capacity_of_aws_openshift_data_foundation_cluster
3.4. Extension Metadata
3.4. Extension Metadata You can use the extension metadata property data-ttl as a model property or on a source table to indicate a default TTL. A negative value means no TTL, 0 means do not cache, and a positive number indicates the time to live in milliseconds. If no TTL is specified on the table, then the schema will be checked. The TTL for the cache entry will be taken as the least positive value among all TTLs. Thus setting this value as a model property can quickly disable any caching against a particular source. Here is an example that shows you how to set the property in the vdb.xml :
[ "<vdb name=\"vdbname\" version=\"1\"> <model name=\"Customers\"> <property name=\"teiid_rel:data-ttl\" value=\"0\"/> ... </model> </vdb>" ]
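By contrast, a positive value sets a time to live in milliseconds. The following sketch reuses the same placeholder VDB and model names to show a five-minute TTL at the model level:

<vdb name="vdbname" version="1">
    <model name="Customers">
        <!-- cached results for this model expire after 300000 ms (5 minutes) -->
        <property name="teiid_rel:data-ttl" value="300000"/>
        ...
    </model>
</vdb>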
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/extension_metadata
Appendix C. Configuring a Jenkins freestyle project to deploy your Node.js application with nodeshift
Appendix C. Configuring a Jenkins freestyle project to deploy your Node.js application with nodeshift Similar to using nodeshift from your local host to deploy a Node.js application, you can configure Jenkins to use nodeshift to deploy a Node.js application. Prerequisites Access to an OpenShift cluster. The Jenkins container image running on the same OpenShift cluster. The Node.js plugin installed on your Jenkins server. A Node.js application configured to use nodeshift and the Red Hat base image. Example using the Red Hat base image with nodeshift $ nodeshift --dockerImage=registry.access.redhat.com/rhscl/ubi8/nodejs-12 ... The source of the application available in GitHub. Procedure Create a new OpenShift project for your application: Open the OpenShift Web console and log in. Click Create Project to create a new OpenShift project. Enter the project information and click Create . Ensure Jenkins has access to that project. For example, if you configured a service account for Jenkins, ensure that account has edit access to the project of your application. Create a new freestyle Jenkins project on your Jenkins server: Click New Item . Enter a name, choose Freestyle project , and click OK . Under Source Code Management , choose Git and add the GitHub URL of your application. Under Build Environment , make sure Provide Node & npm bin/ folder to PATH is checked and the Node.js environment is configured. Under Build , choose Add build step and select Execute Shell . Add the following to the Command area: npm install -g nodeshift nodeshift --dockerImage=registry.access.redhat.com/rhscl/ubi8/nodejs-12 --namespace=MY_PROJECT Substitute MY_PROJECT with the name of the OpenShift project for your application. Click Save . Click Build Now from the main page of the Jenkins project to verify your application builds and deploys to the OpenShift project for your application. You can also verify that your application is deployed by opening the route in the OpenShift project of the application. Next steps Consider adding GITSCM polling or using the Poll SCM build trigger . These options enable builds to run every time a new commit is pushed to the GitHub repository. Consider adding nodeshift as a global package when configuring the Node.js plugin . This allows you to omit npm install -g nodeshift when adding your Execute Shell build step. Consider adding a build step that executes tests before deploying.
[ "nodeshift --dockerImage=registry.access.redhat.com/rhscl/ubi8/nodejs-12", "npm install -g nodeshift nodeshift --dockerImage=registry.access.redhat.com/rhscl/ubi8/nodejs-12 --namespace=MY_PROJECT" ]
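Following the last suggestion above, one possible Execute Shell command that runs the project's tests before deploying might look like this; it assumes the application defines an npm test script and that MY_PROJECT is replaced with the name of your OpenShift project:

# Stop the build immediately if any step fails
set -e

# Install dependencies and run the test suite before deploying
npm install
npm test

# Deploy only after the tests pass
npm install -g nodeshift
nodeshift --dockerImage=registry.access.redhat.com/rhscl/ubi8/nodejs-12 --namespace=MY_PROJECT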
https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/node.js_runtime_guide/configuring-a-jenkins-freestyle-project-to-deploy-your-node-application-with-nodeshift_nodejs
Scaling storage
Scaling storage Red Hat OpenShift Data Foundation 4.17 Instructions for scaling operations in OpenShift Data Foundation Red Hat Storage Documentation Team Abstract This document explains scaling options for Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. Introduction to scaling storage Red Hat OpenShift Data Foundation is a highly scalable storage system. OpenShift Data Foundation allows you to scale by adding the disks in the multiple of three, or three or any number depending upon the deployment type. For internal (dynamic provisioning) deployment mode, you can increase the capacity by adding 3 disks at a time. For internal-attached (Local Storage Operator based) mode, you can deploy with less than 3 failure domains. With flexible scale deployment enabled, you can scale up by adding any number of disks. For deployment with 3 failure domains, you will be able to scale up by adding disks in the multiple of 3. For scaling your storage in external mode, see Red Hat Ceph Storage documentation . Note You can use a maximum of nine storage devices per node. The high number of storage devices will lead to a higher recovery time during the loss of a node. This recommendation ensures that nodes stay below the cloud provider dynamic storage device attachment limits, and limits the recovery time after node failure with local storage devices. While scaling, you must ensure that there are enough CPU and Memory resources as per scaling requirement. Supported storage classes by default gp2-csi on AWS thin on VMware managed_premium on Microsoft Azure 1.1. Supported Deployments for Red Hat OpenShift Data Foundation User-provisioned infrastructure: Amazon Web Services (AWS) VMware Bare metal IBM Power IBM Z or IBM(R) LinuxONE Installer-provisioned infrastructure: Amazon Web Services (AWS) Microsoft Azure VMware Bare metal Chapter 2. Requirements for scaling storage Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Resource requirements Storage device requirements Dynamic storage devices Local storage devices Capacity planning Important Always ensure that you have plenty of storage capacity. If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space completely. Full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. 
If you do run out of storage space completely, contact Red Hat Customer Support . Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on AWS cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 3.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 3.2. Scaling out storage capacity on a AWS cluster OpenShift Data Foundation is highly scalable. 
It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 3.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in the multiple of three, each of them in different failure domains. While it is recommended to add nodes in the multiple of three, you still have the flexibility to add one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during OpenShift Data Foundation deployment. 3.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute -> Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.1.2. Adding a node to an user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. 
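For example, labelling a hypothetical new node named worker-3.example.com from the command line would look like this (the node name is only a placeholder):

# Label the new worker so OpenShift Data Foundation can schedule storage pods on it
oc label node worker-3.example.com cluster.ocs.openshift.io/openshift-storage=""

# Optionally confirm that the label is present
oc get node worker-3.example.com --show-labels | grep openshift-storage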
Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on your bare metal cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 4.1. Scaling up a cluster created using local storage devices To scale up an OpenShift Data Foundation cluster which was created using local storage devices, you need to add a new disk to the storage node. The new disks size must be of the same size as the disks used during the deployment because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in the multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if we scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not in multiples of three, it will only consume the disk to the maximum in the multiple of three while the remaining disks remain unused. For deployments having less than three failure domains, there is a flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note Flexible scaling features get enabled at the time of deployment and cannot be enabled or disabled later on. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disks to be used for scaling are attached to the storage node Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class depending on your requirement. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. 
Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 4.2. Scaling out storage capacity on a bare metal cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. There is no limit on the number of nodes which can be added. Howerver, from the technical support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 4.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in the multiple of three, each of them in different failure domains. While it is recommended to add nodes in the multiple of three, you still have the flexibility to add one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during OpenShift Data Foundation deployment. 4.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute -> Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. 
In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.1.2. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators -> Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) -> Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) -> Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage by adding capacity . Chapter 5. Scaling storage of VMware OpenShift Data Foundation cluster 5.1. 
Scaling up storage on a VMware cluster To increase the storage capacity in a dynamically created storage cluster on a VMware user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disk is of the same size and type as the disk used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 5.2. Scaling up a cluster created using local storage devices To scale up an OpenShift Data Foundation cluster which was created using local storage devices, you need to add a new disk to the storage node. The new disks size must be of the same size as the disks used during the deployment because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in the multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if we scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not in multiples of three, it will only consume the disk to the maximum in the multiple of three while the remaining disks remain unused. 
For deployments having less than three failure domains, there is a flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note Flexible scaling features get enabled at the time of deployment and cannot be enabled or disabled later on. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disks to be used for scaling are attached to the storage node Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class depending on your requirement. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 5.3. Scaling out storage capacity on a VMware cluster 5.3.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . 
Click Compute -> Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.2. Adding a node to an user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.3. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. 
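If several CSRs are pending, they can also be approved in bulk instead of passing each <Certificate_Name> individually. The following one-liner is shown only as an illustration; it approves every CSR that has not yet been approved or denied:

# Approve all CSRs that do not yet have a status (that is, still pending)
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve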
Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators -> Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) -> Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) -> Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 5.3.4. Scaling up storage capacity To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . For local storage devices, see Scaling up a cluster created using local storage devices Chapter 6. Scaling storage of Microsoft Azure OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on Microsoft Azure cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 6.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . 
In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 6.2. Scaling out storage capacity on a Microsoft Azure cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 6.2.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute -> Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 6.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . Chapter 7. 
Scaling storage capacity of GCP OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on GCP cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 7.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 7.2. Scaling out storage capacity on a GCP cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. 
Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 7.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in the multiple of three, each of them in different failure domains. While it is recommended to add nodes in the multiple of three, you still have the flexibility to add one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during OpenShift Data Foundation deployment. 7.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute -> Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 7.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . Chapter 8. Scaling storage of IBM Z or IBM LinuxONE OpenShift Data Foundation cluster 8.1. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on IBM Z or IBM LinuxONE infrastructure You can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Note Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. Prerequisites A running OpenShift Data Foundation Platform. Administrative privileges on the OpenShift Web Console. To scale using a storage class other than the one provisioned during deployment, first define an additional storage class. See Creating storage classes and pools for details. Procedure Add additional hardware resources with zFCP disks. List all the disks. Example output: A SCSI disk is represented as a zfcp-lun with the structure <device-id>:<wwpn>:<lun-id> in the ID section. The first disk is used for the operating system. The device id for the new disk can be the same. Append a new SCSI disk. 
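For reference, listing the disks and appending the new zFCP SCSI disk can be done with the lszdev and chzdev commands; the following is a minimal sketch, where the device bus ID, WWPN, and LUN (0.0.8204:0x400506630b1b50a4:0x3001301a00000000) are example values and must be replaced with the values for your environment:
# List all zFCP devices on the node
lszdev
# Persistently enable (append) the new SCSI disk
chzdev -e 0.0.8204:0x400506630b1b50a4:0x3001301a00000000
Running lszdev zfcp-lun again afterwards should show the new zfcp-lun entry with an additional sd device name.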
Note The device ID for the new disk must be the same as the disk to be replaced. The new disk is identified with its WWPN and LUN ID. List all the FCP devices to verify the new disk is configured. Navigate to the OpenShift Web Console. Click Operators on the left navigation bar. Select Installed Operators . In the window, click OpenShift Data Foundation Operator. In the top navigation bar, scroll right and click Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 8.2. Scaling out storage capacity on a IBM Z or IBM LinuxONE cluster 8.2.1. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. 
Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators -> Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) -> Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) -> Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 8.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . Chapter 9. Scaling storage of IBM Power OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on IBM Power cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 9.1. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on IBM Power infrastructure using local storage devices In order to scale up an OpenShift Data Foundation cluster which was created using local storage devices, a new disk needs to be added to the storage node. It is recommended to have the new disks of the same size as used earlier during the deployment as OpenShift Data Foundation does not support heterogeneous disks/OSD's. You can add storage capacity (additional storage devices) to your configured local storage based OpenShift Data Foundation worker nodes on IBM Power infrastructures. Note Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. Prerequisites You must be logged into the OpenShift Container Platform cluster. You must have installed the local storage operator. 
Use the following procedure: Installing Local Storage Operator on IBM Power You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 0.5TB SSD) as the original OpenShift Data Foundation StorageCluster was created with. Procedure To add storage capacity to OpenShift Container Platform nodes with OpenShift Data Foundation installed, you need to Find the available devices that you want to add, that is, a minimum of one device per worker node. You can follow the procedure for finding available storage devices in the respective deployment guide. Note Make sure you perform this process for all the existing nodes (minimum of 3) for which you want to add storage. Add the additional disks to the LocalVolume custom resource (CR). Example output: Make sure to save the changes after editing the CR. Example output: You can see in this CR that new devices are added. sdx Display the newly created Persistent Volumes (PVs) with the storageclass name used in the localVolume CR. Example output: Navigate to the OpenShift Web Console. Click Operators on the left navigation bar. Select Installed Operators . In the window, click OpenShift Data Foundation Operator. In the top navigation bar, scroll right and click Storage System tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. From this dialog box, set the Storage Class name to the name used in the localVolume CR. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the available Capacity. In the OpenShift Web Console, click Storage -> Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . Navigate to Overview -> Block and File tab, then check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 9.2. Scaling out storage capacity on a IBM Power cluster OpenShift Data Foundation is highly scalable. 
It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 9.2.1. Adding a node using a local storage device on IBM Power You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You must be logged into the OpenShift Container Platform cluster. You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 2TB SSD drive) as the original OpenShift Data Foundation StorageCluster was created with. Procedure Get a new IBM Power machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform node using the new IBM Power machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators -> Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume tab. Beside the LocalVolume , click Action menu (...) -> Edit Local Volume . In the YAML, add the hostname of the new node in the values field under the node selector . Figure 9.1. YAML showing the addition of new hostnames Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 9.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster .
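The command-line steps referenced in the node-addition procedures above follow the same pattern on each platform; the following is a minimal sketch using the oc CLI, where <Certificate_Name> and <new_node_name> are placeholders for your pending CSR and new node:
# Check for certificate signing requests in Pending state and approve them
oc get csr
oc adm certificate approve <Certificate_Name>
# Apply the OpenShift Data Foundation label to the new node
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
# Verify that the new node is present in the output
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
The final command should list the new node among the nodes carrying the OpenShift Data Foundation label.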
[ "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "lszdev", "TYPE ID ON PERS NAMES zfcp-host 0.0.8204 yes yes zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500407630c0b50a4:0x3002b03000000000 yes yes sdb sg1 qeth 0.0.bdd0:0.0.bdd1:0.0.bdd2 yes no encbdd0 generic-ccw 0.0.0009 yes no", "chzdev -e 0.0.8204:0x400506630b1b50a4:0x3001301a00000000", "lszdev zfcp-lun TYPE ID ON PERS NAMES zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 
0.0.8204:0x500507630b1b50a4:0x4001302a00000000 yes yes sdb sg1 zfcp-lun 0.0.8204:0x400506630b1b50a4:0x3001301a00000000 yes yes sdc sg2", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc edit -n openshift-local-storage localvolume localblock", "spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda - /dev/sdx # newly added device storageClassName: localblock volumeMode: Block", "localvolume.local.storage.openshift.io/localblock edited", "oc get pv | grep localblock | grep Available", "local-pv-a04ffd8 500Gi RWO Delete Available localblock 24s local-pv-a0ca996b 500Gi RWO Delete Available localblock 23s local-pv-c171754a 500Gi RWO Delete Available localblock 23s", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/scaling_storage/scaling-up-storage-by-adding-capacity-to-your-openshift-data-foundation-nodes-on-ibm-power_ibm-power
Chapter 13. Volume cloning
Chapter 13. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point in time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 13.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) Clone PVC . Click on the PVC that you want to clone and click Actions Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Enter the required size of the clone. Select the storage class in which you want to create the clone. The storage class can be any RBD storage class and it need not necessarily be the same as the parent PVC. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC.
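A clone can also be requested from the command line by creating a PVC whose dataSource references the parent PVC; the following is a minimal sketch, assuming a hypothetical parent PVC named rbd-pvc of size 10Gi in a namespace my-app and the RBD storage class ocs-storagecluster-ceph-rbd (adjust these names and the size to match your parent PVC):
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone          # hypothetical name for the clone
  namespace: my-app            # hypothetical namespace of the parent PVC
spec:
  storageClassName: ocs-storagecluster-ceph-rbd   # any RBD storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # must match the size of the parent PVC
  dataSource:
    kind: PersistentVolumeClaim
    name: rbd-pvc              # the parent (source) PVC
EOF
As in the console procedure, wait for the cloned PVC to reach the Bound state before attaching it to a pod.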
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_and_allocating_storage_resources/volume-cloning_rhodf
Chapter 5. Compliance Operator
Chapter 5. Compliance Operator 5.1. Compliance Operator overview The OpenShift Container Platform Compliance Operator assists users by automating the inspection of numerous technical implementations and compares those against certain aspects of industry standards, benchmarks, and baselines; the Compliance Operator is not an auditor. In order to be compliant or certified under these various standards, you need to engage an authorized auditor such as a Qualified Security Assessor (QSA), Joint Authorization Board (JAB), or other industry recognized regulatory authority to assess your environment. The Compliance Operator makes recommendations based on generally available information and practices regarding such standards and may assist with remediations, but actual compliance is your responsibility. You are required to work with an authorized auditor to achieve compliance with a standard. For the latest updates, see the Compliance Operator release notes . For more information on compliance support for all Red Hat products, see Product Compliance . Compliance Operator concepts Understanding the Compliance Operator Understanding the Custom Resource Definitions Compliance Operator management Installing the Compliance Operator Updating the Compliance Operator Managing the Compliance Operator Uninstalling the Compliance Operator Compliance Operator scan management Supported compliance profiles Compliance Operator scans Tailoring the Compliance Operator Retrieving Compliance Operator raw results Managing Compliance Operator remediation Performing advanced Compliance Operator tasks Troubleshooting the Compliance Operator Using the oc-compliance plugin 5.2. Compliance Operator release notes The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. These release notes track the development of the Compliance Operator in the OpenShift Container Platform. For an overview of the Compliance Operator, see Understanding the Compliance Operator . To access the latest release, see Updating the Compliance Operator . For more information on compliance support for all Red Hat products, see Product Compliance . 5.2.1. OpenShift Compliance Operator 1.6.2 The following advisory is available for the OpenShift Compliance Operator 1.6.2: RHBA-2025:2659 - OpenShift Compliance Operator 1.6.2 update CVE-2024-45338 is resolved in the Compliance Operator 1.6.2 release. ( CVE-2024-45338 ) 5.2.2. OpenShift Compliance Operator 1.6.1 The following advisory is available for the OpenShift Compliance Operator 1.6.1: RHBA-2024:10367 - OpenShift Compliance Operator 1.6.1 update This update includes upgraded dependencies in underlying base images. 5.2.3. OpenShift Compliance Operator 1.6.0 The following advisory is available for the OpenShift Compliance Operator 1.6.0: RHBA-2024:6761 - OpenShift Compliance Operator 1.6.0 bug fix and enhancement update 5.2.3.1. New features and enhancements The Compliance Operator now contains supported profiles for Payment Card Industry Data Security Standard (PCI-DSS) version 4. For more information, see Supported compliance profiles . The Compliance Operator now contains supported profiles for Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) V2R1. For more information, see Supported compliance profiles . A must-gather extension is now available for the Compliance Operator installed on x86 , ppc64le , and s390x architectures. 
The must-gather tool provides crucial configuration details to Red Hat Customer Support and engineering. For more information, see Using the must-gather tool for the Compliance Operator . 5.2.3.2. Bug fixes Before this release, a misleading description in the ocp4-route-ip-whitelist rule resulted in misunderstanding, causing potential for misconfigurations. With this update, the rule is now more clearly defined. ( CMP-2485 ) Previously, the reporting of all of the ComplianceCheckResults for a DONE status ComplianceScan was incomplete. With this update, an annotation has been added to report the number of total ComplianceCheckResults for a ComplianceScan with a DONE status. ( CMP-2615 ) Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule description contained ambiguous guidelines, leading to confusion among users. With this update, the rule description and actionable steps are clarified. ( OCPBUGS-17828 ) Before this update, sysctl configurations caused certain auto remediations for RHCOS4 rules to fail scans in affected clusters. With this update, the correct sysctl settings are applied and RHCOS4 rules for FedRAMP High profiles pass scans correctly. ( OCPBUGS-19690 ) Before this update, an issue with a jq filter caused errors with the rhacs-operator-controller-manager deployment during compliance checks. With this update, the jq filter expression is updated and the rhacs-operator-controller-manager deployment is exempt from compliance checks pertaining to container resource limits, eliminating false positive results. ( OCPBUGS-19690 ) Before this update, rhcos4-high and rhcos4-moderate profiles checked values of an incorrectly titled configuration file. As a result, some scan checks could fail. With this update, the rhcos4 profiles now check the correct configuration file and scans pass correctly. ( OCPBUGS-31674 ) Previously, the accessTokenInactivityTimeoutSeconds variable used in the oauthclient-inactivity-timeout rule was immutable, leading to a FAIL status when performing DISA STIG scans. With this update, proper enforcement of the accessTokenInactivityTimeoutSeconds variable operates correctly and a PASS status is now possible. ( OCPBUGS-32551 ) Before this update, some annotations for rules were not updated, displaying the incorrect control standards. With this update, annotations for rules are updated correctly, ensuring the correct control standards are displayed. ( OCPBUGS-34982 ) Previously, when upgrading to Compliance Operator 1.5.1, an incorrectly referenced secret in a ServiceMonitor configuration caused integration issues with the Prometheus Operator. With this update, the Compliance Operator will accurately reference the secret containing the token for ServiceMonitor metrics. ( OCPBUGS-39417 ) 5.2.4. OpenShift Compliance Operator 1.5.1 The following advisory is available for the OpenShift Compliance Operator 1.5.1: RHBA-2024:5956 - OpenShift Compliance Operator 1.5.1 bug fix and enhancement update 5.2.5. OpenShift Compliance Operator 1.5.0 The following advisory is available for the OpenShift Compliance Operator 1.5.0: RHBA-2024:3533 - OpenShift Compliance Operator 1.5.0 bug fix and enhancement update 5.2.5.1. New features and enhancements With this update, the Compliance Operator provides a unique profile ID for easier programmatic use. ( CMP-2450 ) With this release, the Compliance Operator is now tested and supported on the ROSA HCP environment. The Compliance Operator loads only Node profiles when running on ROSA HCP.
This is because a Red Hat managed platform restricts access to the control plane, which makes Platform profiles irrelevant to the operator's function. ( CMP-2581 ) 5.2.5.2. Bug fixes CVE-2024-2961 is resolved in the Compliance Operator 1.5.0 release. ( CVE-2024-2961 ) Previously, for ROSA HCP systems, profile listings were incorrect. This update allows the Compliance Operator to provide correct profile output. ( OCPBUGS-34535 ) With this release, namespaces can be excluded from the ocp4-configure-network-policies-namespaces check by setting the ocp4-var-network-policies-namespaces-exempt-regex variable in the tailored profile. ( CMP-2543 ) 5.2.6. OpenShift Compliance Operator 1.4.1 The following advisory is available for the OpenShift Compliance Operator 1.4.1: RHBA-2024:1830 - OpenShift Compliance Operator bug fix and enhancement update 5.2.6.1. New features and enhancements As of this release, the Compliance Operator now provides the CIS OpenShift 1.5.0 profile rules. ( CMP-2447 ) With this update, the Compliance Operator now provides OCP4 STIG ID and SRG with the profile rules. ( CMP-2401 ) With this update, obsolete rules being applied to s390x have been removed. ( CMP-2471 ) 5.2.6.2. Bug fixes Previously, for Red Hat Enterprise Linux CoreOS (RHCOS) systems using Red Hat Enterprise Linux (RHEL) 9, application of the ocp4-kubelet-enable-protect-kernel-sysctl-file-exist rule failed. This update replaces the rule with ocp4-kubelet-enable-protect-kernel-sysctl . Now, after auto remediation is applied, RHEL 9-based RHCOS systems will show PASS upon the application of this rule. ( OCPBUGS-13589 ) Previously, after applying compliance remediations using profile rhcos4-e8 , the nodes were no longer accessible using SSH to the core user account. With this update, nodes remain accessible through SSH using the sshkey1 option. ( OCPBUGS-18331 ) Previously, the STIG profile was missing rules from CaC that fulfill requirements on the published STIG for OpenShift Container Platform. With this update, upon remediation, the cluster satisfies STIG requirements that can be remediated using Compliance Operator. ( OCPBUGS-26193 ) Previously, creating a ScanSettingBinding object with profiles of different types for multiple products bypassed a restriction against multiple product types in a binding. With this update, the product validation now allows multiple products regardless of the profile types in the ScanSettingBinding object. ( OCPBUGS-26229 ) Previously, running the rhcos4-service-debug-shell-disabled rule showed as FAIL even after auto-remediation was applied. With this update, running the rhcos4-service-debug-shell-disabled rule now shows PASS after auto-remediation is applied. ( OCPBUGS-28242 ) With this update, instructions for the use of the rhcos4-banner-etc-issue rule are enhanced to provide more detail. ( OCPBUGS-28797 ) Previously, the api_server_api_priority_flowschema_catch_all rule provided FAIL status on OpenShift Container Platform 4.16 clusters. With this update, the api_server_api_priority_flowschema_catch_all rule provides PASS status on OpenShift Container Platform 4.16 clusters. ( OCPBUGS-28918 ) Previously, when a profile was removed from a completed scan shown in a ScanSettingBinding (SSB) object, the Compliance Operator did not remove the old scan. Afterward, when launching a new SSB using the deleted profile, the Compliance Operator failed to update the result. With this release of the Compliance Operator, the new SSB now shows the new compliance check result.
( OCPBUGS-29272 ) Previously, on ppc64le architecture, the metrics service was not created. With this update, when deploying the Compliance Operator v1.4.1 on ppc64le architecture, the metrics service is now created correctly. ( OCPBUGS-32797 ) Previously, on a HyperShift hosted cluster, a scan with the ocp4-pci-dss profile will run into an unrecoverable error due to a filter cannot iterate issue. With this release, the scan for the ocp4-pci-dss profile will reach done status and return either a Compliance or Non-Compliance test result. ( OCPBUGS-33067 ) 5.2.7. OpenShift Compliance Operator 1.4.0 The following advisory is available for the OpenShift Compliance Operator 1.4.0: RHBA-2023:7658 - OpenShift Compliance Operator bug fix and enhancement update 5.2.7.1. New features and enhancements With this update, clusters which use custom node pools outside the default worker and master node pools no longer need to supply additional variables to ensure Compliance Operator aggregates the configuration file for that node pool. Users can now pause scan schedules by setting the ScanSetting.suspend attribute to True . This allows users to suspend a scan schedule and reactivate it without the need to delete and re-create the ScanSettingBinding . This simplifies pausing scan schedules during maintenance periods. ( CMP-2123 ) Compliance Operator now supports an optional version attribute on Profile custom resources. ( CMP-2125 ) Compliance Operator now supports profile names in ComplianceRules . ( CMP-2126 ) Compliance Operator compatibility with improved cronjob API improvements is available in this release. ( CMP-2310 ) 5.2.7.2. Bug fixes Previously, on a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes were not skipped by the compliance scan. With this release, Windows nodes are correctly skipped when scanning. ( OCPBUGS-7355 ) With this update, rprivate default mount propagation is now handled correctly for root volume mounts of pods that rely on multipathing. ( OCPBUGS-17494 ) Previously, the Compliance Operator would generate a remediation for coreos_vsyscall_kernel_argument without reconciling the rule even while applying the remediation. With release 1.4.0, the coreos_vsyscall_kernel_argument rule properly evaluates kernel arguments and generates an appropriate remediation.( OCPBUGS-8041 ) Before this update, rule rhcos4-audit-rules-login-events-faillock would fail even after auto-remediation has been applied. With this update, rhcos4-audit-rules-login-events-faillock failure locks are now applied correctly after auto-remediation. ( OCPBUGS-24594 ) Previously, upgrades from Compliance Operator 1.3.1 to Compliance Operator 1.4.0 would cause OVS rules scan results to go from PASS to NOT-APPLICABLE . With this update, OVS rules scan results now show PASS ( OCPBUGS-25323 ) 5.2.8. OpenShift Compliance Operator 1.3.1 The following advisory is available for the OpenShift Compliance Operator 1.3.1: RHBA-2023:5669 - OpenShift Compliance Operator bug fix and enhancement update This update addresses a CVE in an underlying dependency. 5.2.8.1. New features and enhancements You can install and use the Compliance Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 5.2.8.2. Known issue On a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes are not skipped by the compliance scan. This differs from the expected results because the Windows nodes must be skipped when scanning. ( OCPBUGS-7355 ) 5.2.9. OpenShift Compliance Operator 1.3.0 The following advisory is available for the OpenShift Compliance Operator 1.3.0: RHBA-2023:5102 - OpenShift Compliance Operator enhancement update 5.2.9.1. New features and enhancements The Defense Information Systems Agency Security Technical Implementation Guide (DISA-STIG) for OpenShift Container Platform is now available from Compliance Operator 1.3.0. See Supported compliance profiles for additional information. Compliance Operator 1.3.0 now supports IBM Power(R) and IBM Z(R) for NIST 800-53 Moderate-Impact Baseline for OpenShift Container Platform platform and node profiles. 5.2.10. OpenShift Compliance Operator 1.2.0 The following advisory is available for the OpenShift Compliance Operator 1.2.0: RHBA-2023:4245 - OpenShift Compliance Operator enhancement update 5.2.10.1. New features and enhancements The CIS OpenShift Container Platform 4 Benchmark v1.4.0 profile is now available for platform and node applications. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark , where you can then register to download the benchmark. Important Upgrading to Compliance Operator 1.2.0 will overwrite the CIS OpenShift Container Platform 4 Benchmark 1.1.0 profiles. If your OpenShift Container Platform environment contains existing cis and cis-node remediations, there might be some differences in scan results after upgrading to Compliance Operator 1.2.0. Additional clarity for auditing security context constraints (SCCs) is now available for the scc-limit-container-allowed-capabilities rule. 5.2.11. OpenShift Compliance Operator 1.1.0 The following advisory is available for the OpenShift Compliance Operator 1.1.0: RHBA-2023:3630 - OpenShift Compliance Operator bug fix and enhancement update 5.2.11.1. New features and enhancements A start and end timestamp is now available in the ComplianceScan custom resource definition (CRD) status. The Compliance Operator can now be deployed on hosted control planes using the OperatorHub by creating a Subscription file. For more information, see Installing the Compliance Operator on hosted control planes . 5.2.11.2. Bug fixes Before this update, some Compliance Operator rule instructions were not present. After this update, instructions are improved for the following rules: classification_banner oauth_login_template_set oauth_logout_url_set oauth_provider_selection_set ocp_allowed_registries ocp_allowed_registries_for_import ( OCPBUGS-10473 ) Before this update, check accuracy and rule instructions were unclear. 
After this update, the check accuracy and instructions are improved for the following sysctl rules: kubelet-enable-protect-kernel-sysctl kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxbytes kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxkeys kubelet-enable-protect-kernel-sysctl-kernel-panic kubelet-enable-protect-kernel-sysctl-kernel-panic-on-oops kubelet-enable-protect-kernel-sysctl-vm-overcommit-memory kubelet-enable-protect-kernel-sysctl-vm-panic-on-oom ( OCPBUGS-11334 ) Before this update, the ocp4-alert-receiver-configured rule did not include instructions. With this update, the ocp4-alert-receiver-configured rule now includes improved instructions. ( OCPBUGS-7307 ) Before this update, the rhcos4-sshd-set-loglevel-info rule would fail for the rhcos4-e8 profile. With this update, the remediation for the sshd-set-loglevel-info rule was updated to apply the correct configuration changes, allowing subsequent scans to pass after the remediation is applied. ( OCPBUGS-7816 ) Before this update, a new installation of OpenShift Container Platform with the latest Compliance Operator install failed on the scheduler-no-bind-address rule. With this update, the scheduler-no-bind-address rule has been disabled on newer versions of OpenShift Container Platform since the parameter was removed. ( OCPBUGS-8347 ) 5.2.12. OpenShift Compliance Operator 1.0.0 The following advisory is available for the OpenShift Compliance Operator 1.0.0: RHBA-2023:1682 - OpenShift Compliance Operator bug fix update 5.2.12.1. New features and enhancements The Compliance Operator is now stable and the release channel is upgraded to stable . Future releases will follow Semantic Versioning . To access the latest release, see Updating the Compliance Operator . 5.2.12.2. Bug fixes Before this update, the compliance_operator_compliance_scan_error_total metric had an ERROR label with a different value for each error message. With this update, the compliance_operator_compliance_scan_error_total metric does not increase in values. ( OCPBUGS-1803 ) Before this update, the ocp4-api-server-audit-log-maxsize rule would result in a FAIL state. With this update, the error message has been removed from the metric, decreasing the cardinality of the metric in line with best practices. ( OCPBUGS-7520 ) Before this update, the rhcos4-enable-fips-mode rule description was misleading that FIPS could be enabled after installation. With this update, the rhcos4-enable-fips-mode rule description clarifies that FIPS must be enabled at install time. ( OCPBUGS-8358 ) 5.2.13. OpenShift Compliance Operator 0.1.61 The following advisory is available for the OpenShift Compliance Operator 0.1.61: RHBA-2023:0557 - OpenShift Compliance Operator bug fix update 5.2.13.1. New features and enhancements The Compliance Operator now supports timeout configuration for Scanner Pods. The timeout is specified in the ScanSetting object. If the scan is not completed within the timeout, the scan retries until the maximum number of retries is reached. See Configuring ScanSetting timeout for more information. 5.2.13.2. Bug fixes Before this update, Compliance Operator remediations required variables as inputs. Remediations without variables set were applied cluster-wide and resulted in stuck nodes, even though it appeared the remediation applied correctly. With this update, the Compliance Operator validates if a variable needs to be supplied using a TailoredProfile for a remediation. 
( OCPBUGS-3864 ) Before this update, the instructions for ocp4-kubelet-configure-tls-cipher-suites were incomplete, requiring users to refine the query manually. With this update, the query provided in ocp4-kubelet-configure-tls-cipher-suites returns the actual results to perform the audit steps. ( OCPBUGS-3017 ) Before this update, system reserved parameters were not generated in kubelet configuration files, causing the Compliance Operator to fail to unpause the machine config pool. With this update, the Compliance Operator omits system reserved parameters during machine configuration pool evaluation. ( OCPBUGS-4445 ) Before this update, ComplianceCheckResult objects did not have correct descriptions. With this update, the Compliance Operator sources the ComplianceCheckResult information from the rule description. ( OCPBUGS-4615 ) Before this update, the Compliance Operator did not check for empty kubelet configuration files when parsing machine configurations. As a result, the Compliance Operator would panic and crash. With this update, the Compliance Operator implements improved checking of the kubelet configuration data structure and only continues if it is fully rendered. ( OCPBUGS-4621 ) Before this update, the Compliance Operator generated remediations for kubelet evictions based on machine config pool name and a grace period, resulting in multiple remediations for a single eviction rule. With this update, the Compliance Operator applies all remediations for a single rule. ( OCPBUGS-4338 ) Before this update, a regression occurred when attempting to create a ScanSettingBinding that was using a TailoredProfile with a non-default MachineConfigPool marked the ScanSettingBinding as Failed . With this update, functionality is restored and custom ScanSettingBinding using a TailoredProfile performs correctly. ( OCPBUGS-6827 ) Before this update, some kubelet configuration parameters did not have default values. With this update, the following parameters contain default values ( OCPBUGS-6708 ): ocp4-cis-kubelet-enable-streaming-connections ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available Before this update, the selinux_confinement_of_daemons rule failed running on the kubelet because of the permissions necessary for the kubelet to run. With this update, the selinux_confinement_of_daemons rule is disabled. ( OCPBUGS-6968 ) 5.2.14. OpenShift Compliance Operator 0.1.59 The following advisory is available for the OpenShift Compliance Operator 0.1.59: RHBA-2022:8538 - OpenShift Compliance Operator bug fix update 5.2.14.1. New features and enhancements The Compliance Operator now supports Payment Card Industry Data Security Standard (PCI-DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture. 5.2.14.2. Bug fixes Previously, the Compliance Operator did not support the Payment Card Industry Data Security Standard (PCI DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on different architectures such as ppc64le . Now, the Compliance Operator supports ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture. ( OCPBUGS-3252 ) Previously, after the recent update to version 0.1.57, the rerunner service account (SA) was no longer owned by the cluster service version (CSV), which caused the SA to be removed during the Operator upgrade. 
Now, the CSV owns the rerunner SA in 0.1.59, and upgrades from any version will not result in a missing SA. ( OCPBUGS-3452 ) 5.2.15. OpenShift Compliance Operator 0.1.57 The following advisory is available for the OpenShift Compliance Operator 0.1.57: RHBA-2022:6657 - OpenShift Compliance Operator bug fix update 5.2.15.1. New features and enhancements KubeletConfig checks changed from Node to Platform type. KubeletConfig checks the default configuration of the KubeletConfig . The configuration files are aggregated from all nodes into a single location per node pool. See Evaluating KubeletConfig rules against default configuration values . The ScanSetting Custom Resource now allows users to override the default CPU and memory limits of scanner pods through the scanLimits attribute. For more information, see Increasing Compliance Operator resource limits . A PriorityClass object can now be set through ScanSetting . This ensures the Compliance Operator is prioritized and minimizes the chance that the cluster falls out of compliance. For more information, see Setting PriorityClass for ScanSetting scans . 5.2.15.2. Bug fixes Previously, the Compliance Operator hard-coded notifications to the default openshift-compliance namespace. If the Operator were installed in a non-default namespace, the notifications would not work as expected. Now, notifications work in non-default openshift-compliance namespaces. ( BZ#2060726 ) Previously, the Compliance Operator was unable to evaluate default configurations used by kubelet objects, resulting in inaccurate results and false positives. This new feature evaluates the kubelet configuration and now reports accurately. ( BZ#2075041 ) Previously, the Compliance Operator reported the ocp4-kubelet-configure-event-creation rule in a FAIL state after applying an automatic remediation because the eventRecordQPS value was set higher than the default value. Now, the ocp4-kubelet-configure-event-creation rule remediation sets the default value, and the rule applies correctly. ( BZ#2082416 ) The ocp4-configure-network-policies rule requires manual intervention to perform effectively. New descriptive instructions and rule updates increase applicability of the ocp4-configure-network-policies rule for clusters using Calico CNIs. ( BZ#2091794 ) Previously, the Compliance Operator would not clean up pods used to scan infrastructure when using the debug=true option in the scan settings. This caused pods to be left on the cluster even after deleting the ScanSettingBinding . Now, pods are always deleted when a ScanSettingBinding is deleted.( BZ#2092913 ) Previously, the Compliance Operator used an older version of the operator-sdk command that caused alerts about deprecated functionality. Now, an updated version of the operator-sdk command is included and there are no more alerts for deprecated functionality. ( BZ#2098581 ) Previously, the Compliance Operator would fail to apply remediations if it could not determine the relationship between kubelet and machine configurations. Now, the Compliance Operator has improved handling of the machine configurations and is able to determine if a kubelet configuration is a subset of a machine configuration. ( BZ#2102511 ) Previously, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation did not properly describe success criteria. As a result, the requirements for RotateKubeletClientCertificate were unclear. 
Now, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation reports accurately regardless of the configuration present in the kubelet configuration file. ( BZ#2105153 ) Previously, the rule for checking idle streaming timeouts did not consider default values, resulting in inaccurate rule reporting. Now, more robust checks ensure increased accuracy in results based on default configuration values. ( BZ#2105878 ) Previously, the Compliance Operator would fail to fetch API resources when parsing machine configurations without Ignition specifications, which caused the api-check-pods processes to crash loop. Now, the Compliance Operator handles Machine Config Pools that do not have Ignition specifications correctly. ( BZ#2117268 ) Previously, rules evaluating the modprobe configuration would fail even after applying remediations due to a mismatch in values for the modprobe configuration. Now, the same values are used for the modprobe configuration in checks and remediations, ensuring consistent results. ( BZ#2117747 ) 5.2.15.3. Deprecations Specifying Install into all namespaces in the cluster or setting the WATCH_NAMESPACES environment variable to "" no longer affects all namespaces. Any API resources installed in namespaces not specified at the time of Compliance Operator installation is no longer be operational. API resources might require creation in the selected namespace, or the openshift-compliance namespace by default. This change improves the Compliance Operator's memory usage. 5.2.16. OpenShift Compliance Operator 0.1.53 The following advisory is available for the OpenShift Compliance Operator 0.1.53: RHBA-2022:5537 - OpenShift Compliance Operator bug fix update 5.2.16.1. Bug fixes Previously, the ocp4-kubelet-enable-streaming-connections rule contained an incorrect variable comparison, resulting in false positive scan results. Now, the Compliance Operator provides accurate scan results when setting streamingConnectionIdleTimeout . ( BZ#2069891 ) Previously, group ownership for /etc/openvswitch/conf.db was incorrect on IBM Z(R) architectures, resulting in ocp4-cis-node-worker-file-groupowner-ovs-conf-db check failures. Now, the check is marked NOT-APPLICABLE on IBM Z(R) architecture systems. ( BZ#2072597 ) Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule reported in a FAIL state due to incomplete data regarding the security context constraints (SCC) rules in the deployment. Now, the result is MANUAL , which is consistent with other checks that require human intervention. ( BZ#2077916 ) Previously, the following rules failed to account for additional configuration paths for API servers and TLS certificates and keys, resulting in reported failures even if the certificates and keys were set properly: ocp4-cis-api-server-kubelet-client-cert ocp4-cis-api-server-kubelet-client-key ocp4-cis-kubelet-configure-tls-cert ocp4-cis-kubelet-configure-tls-key Now, the rules report accurately and observe legacy file paths specified in the kubelet configuration file. ( BZ#2079813 ) Previously, the content_rule_oauth_or_oauthclient_inactivity_timeout rule did not account for a configurable timeout set by the deployment when assessing compliance for timeouts. This resulted in the rule failing even if the timeout was valid. Now, the Compliance Operator uses the var_oauth_inactivity_timeout variable to set valid timeout length. 
( BZ#2081952 ) Previously, the Compliance Operator used administrative permissions on namespaces not labeled appropriately for privileged use, resulting in warning messages regarding pod security-level violations. Now, the Compliance Operator has appropriate namespace labels and permission adjustments to access results without violating permissions. ( BZ#2088202 ) Previously, applying auto remediations for rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern resulted in subsequent failures of those rules in scan results, even though they were remediated. Now, the rules report PASS accurately, even after remediations are applied. ( BZ#2094382 ) Previously, the Compliance Operator would fail in a CrashLoopBackoff state because of out-of-memory exceptions. Now, the Compliance Operator is improved to handle large machine configuration data sets in memory and function correctly. ( BZ#2094854 ) 5.2.16.2. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: $ oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.2.17. OpenShift Compliance Operator 0.1.52 The following advisory is available for the OpenShift Compliance Operator 0.1.52: RHBA-2022:4657 - OpenShift Compliance Operator bug fix update 5.2.17.1. New features and enhancements The FedRAMP high SCAP profile is now available for use in OpenShift Container Platform environments. For more information, see Supported compliance profiles . 5.2.17.2. Bug fixes Previously, the OpenScap container would crash due to a mount permission issue in a security environment where DAC_OVERRIDE capability is dropped. Now, executable mount permissions are applied to all users. ( BZ#2082151 ) Previously, the compliance rule ocp4-configure-network-policies could be configured as MANUAL . Now, compliance rule ocp4-configure-network-policies is set to AUTOMATIC . ( BZ#2072431 ) Previously, the Cluster Autoscaler would fail to scale down because the Compliance Operator scan pods were never removed after a scan. Now, the pods are removed from each node by default unless explicitly saved for debugging purposes. ( BZ#2075029 ) Previously, applying the Compliance Operator to the KubeletConfig would result in the node going into a NotReady state due to unpausing the Machine Config Pools too early. Now, the Machine Config Pools are unpaused appropriately and the node operates correctly. ( BZ#2071854 ) Previously, the Machine Config Operator used base64 instead of url-encoded code in the latest release, causing Compliance Operator remediation to fail. Now, the Compliance Operator checks encoding to handle both base64 and url-encoded Machine Config code and the remediation applies correctly. ( BZ#2082431 ) 5.2.17.3. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: $ oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.2.18. OpenShift Compliance Operator 0.1.49 The following advisory is available for the OpenShift Compliance Operator 0.1.49: RHBA-2022:1148 - OpenShift Compliance Operator bug fix and enhancement update 5.2.18.1.
New features and enhancements The Compliance Operator is now supported on the following architectures: IBM Power(R) IBM Z(R) IBM(R) LinuxONE 5.2.18.2. Bug fixes Previously, the openshift-compliance content did not include platform-specific checks for network types. As a result, OVN- and SDN-specific checks would show as failed instead of not-applicable based on the network configuration. Now, new rules contain platform checks for networking rules, resulting in a more accurate assessment of network-specific checks. ( BZ#1994609 ) Previously, the ocp4-moderate-routes-protected-by-tls rule incorrectly checked TLS settings, resulting in the rule failing the check even if the connection used a secure SSL/TLS protocol. Now, the check properly evaluates TLS settings that are consistent with the networking guidance and profile recommendations. ( BZ#2002695 ) Previously, ocp-cis-configure-network-policies-namespace used pagination when requesting namespaces. This caused the rule to fail in deployments with more than 500 namespaces, because the returned list was truncated. Now, the entire namespace list is requested, and the rule for checking configured network policies works for deployments with more than 500 namespaces. ( BZ#2038909 ) Previously, remediations using the sshd jinja macros were hard-coded to specific sshd configurations. As a result, the configurations were inconsistent with the content the rules were checking for and the check would fail. Now, the sshd configuration is parameterized and the rules apply successfully. ( BZ#2049141 ) Previously, the ocp4-cluster-version-operator-verify-integrity rule always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the upgrade would fail in situations where subsequent versions of OpenShift Container Platform would be verified. Now, the compliance check result for ocp4-cluster-version-operator-verify-integrity is able to detect verified versions and is accurate with the CVO history. ( BZ#2053602 ) Previously, the ocp4-api-server-no-adm-ctrl-plugins-disabled rule did not check for a list of empty admission controller plugins. As a result, the rule would always fail, even if all admission plugins were enabled. Now, more robust checking of the ocp4-api-server-no-adm-ctrl-plugins-disabled rule accurately passes with all admission controller plugins enabled. ( BZ#2058631 ) Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never-ending scan loop. Now, scans are scheduled appropriately based on platform type and labels, and complete successfully. ( BZ#2056911 ) 5.2.19. OpenShift Compliance Operator 0.1.48 The following advisory is available for the OpenShift Compliance Operator 0.1.48: RHBA-2022:0416 - OpenShift Compliance Operator bug fix and enhancement update 5.2.19.1. Bug fixes Previously, some rules associated with extended Open Vulnerability and Assessment Language (OVAL) definitions had a checkType of None . This was because the Compliance Operator was not processing extended OVAL definitions when parsing rules. With this update, content from extended OVAL definitions is parsed so that these rules now have a checkType of either Node or Platform . ( BZ#2040282 ) Previously, a manually created MachineConfig object for KubeletConfig prevented a KubeletConfig object from being generated for remediation, leaving the remediation in the Pending state.
With this release, a KubeletConfig object is created by the remediation, regardless of whether there is a manually created MachineConfig object for KubeletConfig . As a result, KubeletConfig remediations now work as expected. ( BZ#2040401 ) 5.2.20. OpenShift Compliance Operator 0.1.47 The following advisory is available for the OpenShift Compliance Operator 0.1.47: RHBA-2022:0014 - OpenShift Compliance Operator bug fix and enhancement update 5.2.20.1. New features and enhancements The Compliance Operator now supports the following compliance benchmarks for the Payment Card Industry Data Security Standard (PCI DSS): ocp4-pci-dss ocp4-pci-dss-node Additional rules and remediations for FedRAMP moderate impact level are added to the OCP4-moderate, OCP4-moderate-node, and rhcos4-moderate profiles. Remediations for KubeletConfig are now available in node-level profiles. 5.2.20.2. Bug fixes Previously, if your cluster was running OpenShift Container Platform 4.6 or earlier, remediations for USBGuard-related rules would fail for the moderate profile. This is because the remediations created by the Compliance Operator were based on an older version of USBGuard that did not support drop-in directories. Now, invalid remediations for USBGuard-related rules are not created for clusters running OpenShift Container Platform 4.6. If your cluster is using OpenShift Container Platform 4.6, you must manually create remediations for USBGuard-related rules. Additionally, remediations are created only for rules that satisfy minimum version requirements. ( BZ#1965511 ) Previously, when rendering remediations, the Compliance Operator would check that the remediation was well-formed by using a regular expression that was too strict. As a result, some remediations, such as those that render sshd_config , would not pass the regular expression check and therefore, were not created. The regular expression was found to be unnecessary and removed. Remediations now render correctly. ( BZ#2033009 ) 5.2.21. OpenShift Compliance Operator 0.1.44 The following advisory is available for the OpenShift Compliance Operator 0.1.44: RHBA-2021:4530 - OpenShift Compliance Operator bug fix and enhancement update 5.2.21.1. New features and enhancements In this release, the strictNodeScan option is now added to the ComplianceScan , ComplianceSuite and ScanSetting CRs. This option defaults to true , which matches the previous behavior, where an error occurred if a scan could not be scheduled on a node. Setting the option to false allows the Compliance Operator to be more permissive about scheduling scans. Environments with ephemeral nodes can set the strictNodeScan value to false, which allows a compliance scan to proceed, even if some of the nodes in the cluster are not available for scheduling. You can now customize the node that is used to schedule the result server workload by configuring the nodeSelector and tolerations attributes of the ScanSetting object. These attributes are used to place the ResultServer pod, the pod that is used to mount a PV storage volume and store the raw Asset Reporting Format (ARF) results. Previously, the nodeSelector and the tolerations parameters defaulted to selecting one of the control plane nodes and tolerating the node-role.kubernetes.io/master taint . This did not work in environments where control plane nodes are not permitted to mount PVs. This feature provides a way for you to select the node and tolerate a different taint in those environments.
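As an illustration of these two options, the following is a minimal ScanSetting sketch that relaxes node scheduling and places the ResultServer pod on worker nodes; the object name and the infra taint key are assumptions made for this example, not values shipped with the release:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: ephemeral-friendly             # assumed name for this example
  namespace: openshift-compliance
strictNodeScan: false                   # allow the scan to proceed when a node cannot be scheduled
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/worker: ""  # place the ResultServer pod on worker nodes
  tolerations:
  - key: infra                          # assumed custom taint used in the environment
    operator: Exists
    effect: NoSchedule
  rotation: 3
  size: 1Gi
roles:
- worker
- master
scanTolerations:
- operator: Exists
schedule: '0 1 * * *'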
The Compliance Operator can now remediate KubeletConfig objects. A comment containing an error message is now added to help content developers differentiate between objects that do not exist in the cluster and objects that cannot be fetched. Rule objects now contain two new attributes, checkType and description . These attributes allow you to determine if the rule pertains to a node check or platform check, and also allow you to review what the rule does. This enhancement removes the requirement that you have to extend an existing profile to create a tailored profile. This means the extends field in the TailoredProfile CRD is no longer mandatory. You can now select a list of rule objects to create a tailored profile. Note that you must select whether your profile applies to nodes or the platform by setting the compliance.openshift.io/product-type: annotation or by setting the -node suffix for the TailoredProfile CR. In this release, the Compliance Operator is now able to schedule scans on all nodes irrespective of their taints. Previously, the scan pods tolerated only the node-role.kubernetes.io/master taint , meaning that they would run only on nodes with no taints or on nodes with the node-role.kubernetes.io/master taint. In deployments that use custom taints for their nodes, this resulted in the scans not being scheduled on those nodes. Now, the scan pods tolerate all node taints. In this release, the Compliance Operator supports the following North American Electric Reliability Corporation (NERC) security profiles: ocp4-nerc-cip ocp4-nerc-cip-node rhcos4-nerc-cip In this release, the Compliance Operator supports the NIST 800-53 Moderate-Impact Baseline for the Red Hat OpenShift - Node level, ocp4-moderate-node, security profile. 5.2.21.2. Templating and variable use In this release, the remediation template now allows multi-value variables. With this update, the Compliance Operator can change remediations based on variables that are set in the compliance profile. This is useful for remediations that include deployment-specific values such as timeouts, NTP server host names, or similar. Additionally, the ComplianceCheckResult objects now use the label compliance.openshift.io/check-has-value that lists the variables a check has used. 5.2.21.3. Bug fixes Previously, while performing a scan, an unexpected termination occurred in one of the scanner containers of the pods. In this release, the Compliance Operator uses the latest OpenSCAP version 1.3.5 to avoid a crash. Previously, using autoApplyRemediations to apply remediations triggered an update of the cluster nodes. This was disruptive if some of the remediations did not include all of the required input variables. Now, if a remediation is missing one or more required input variables, it is assigned a state of NeedsReview . If one or more remediations are in a NeedsReview state, the machine config pool remains paused, and the remediations are not applied until all of the required variables are set. This helps minimize disruption to the nodes. The RBAC Role and Role Binding used for Prometheus metrics are changed to 'ClusterRole' and 'ClusterRoleBinding' to ensure that monitoring works without customization. Previously, if an error occurred while parsing a profile, rules or variables objects were removed and deleted from the profile. Now, if an error occurs during parsing, the profileparser annotates the object with a temporary annotation that prevents the object from being deleted until after parsing completes.
( BZ#1988259 ) Previously, an error occurred if titles or descriptions were missing from a tailored profile. Because the XCCDF standard requires titles and descriptions for tailored profiles, titles and descriptions are now required to be set in TailoredProfile CRs. Previously, when using tailored profiles, TailoredProfile variable values were allowed to be set using only a specific selection set. This restriction is now removed, and TailoredProfile variables can be set to any value. 5.2.22. Release Notes for Compliance Operator 0.1.39 The following advisory is available for the OpenShift Compliance Operator 0.1.39: RHBA-2021:3214 - OpenShift Compliance Operator bug fix and enhancement update 5.2.22.1. New features and enhancements Previously, the Compliance Operator was unable to parse Payment Card Industry Data Security Standard (PCI DSS) references. Now, the Operator can parse compliance content that is provided with PCI DSS profiles. Previously, the Compliance Operator was unable to execute rules for AU-5 control in the moderate profile. Now, permission is added to the Operator so that it can read Prometheusrules.monitoring.coreos.com objects and run the rules that cover AU-5 control in the moderate profile. 5.2.23. Additional resources Understanding the Compliance Operator 5.3. Compliance Operator support 5.3.1. Compliance Operator lifecycle The Compliance Operator is a "Rolling Stream" Operator, meaning updates are available asynchronously of OpenShift Container Platform releases. For more information, see OpenShift Operator Life Cycles on the Red Hat Customer Portal. 5.3.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 5.3.3. Using the must-gather tool for the Compliance Operator Starting in Compliance Operator v1.6.0, you can collect data about the Compliance Operator resources by running the must-gather command with the Compliance Operator image. Note Consider using the must-gather tool when opening support cases or filing bug reports, as it provides additional details about the Operator configuration and logs. Procedure Run the following command to collect data about the Compliance Operator: USD oc adm must-gather --image=USD(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name=="must-gather")].image}') 5.3.4. Additional resources About the must-gather tool Product Compliance 5.4. Compliance Operator concepts 5.4.1. Understanding the Compliance Operator The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. 
The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content. Important The Compliance Operator is available for Red Hat Enterprise Linux CoreOS (RHCOS) deployments only. 5.4.1.1. Compliance Operator profiles There are several profiles available as part of the Compliance Operator installation. You can use the oc get command to view available profiles, profile details, and specific rules. View the available profiles: USD oc get profile.compliance -n openshift-compliance Example output NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1 These profiles represent different compliance benchmarks. Each profile has the product name that it applies to added as a prefix to the profile's name. ocp4-e8 applies the Essential 8 benchmark to the OpenShift Container Platform product, while rhcos4-e8 applies the Essential 8 benchmark to the Red Hat Enterprise Linux CoreOS (RHCOS) product. Run the following command to view the details of the rhcos4-e8 profile: USD oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8 Example 5.1. Example output apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. 
A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: "2022-10-19T12:06:49Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "43699" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight Run the following command to view the details of the rhcos4-audit-rules-login-events rule: USD oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events Example 5.2. Example output apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. 
If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: "2022-10-19T12:07:08Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "44819" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 5.4.1.1.1. Compliance Operator profile types There are two types of compliance profiles available: Platform and Node. Platform Platform scans target your OpenShift Container Platform cluster. Node Node scans target the nodes of the cluster. Important For compliance profiles that have Node and Platform applications, such as pci-dss compliance profiles, you must run both in your OpenShift Container Platform environment. 5.4.1.2. Additional resources Supported compliance profiles 5.4.2. Understanding the Custom Resource Definitions The Compliance Operator in the OpenShift Container Platform provides you with several Custom Resource Definitions (CRDs) to accomplish the compliance scans. To run a compliance scan, it leverages the predefined security policies, which are derived from the ComplianceAsCode community project. The Compliance Operator converts these security policies into CRDs, which you can use to run compliance scans and get remediations for the issues found. 5.4.2.1. CRDs workflow The CRD provides you the following workflow to complete the compliance scans: Define your compliance scan requirements Configure the compliance scan settings Process compliance requirements with compliance scans settings Monitor the compliance scans Check the compliance scan results 5.4.2.2. Defining the compliance scan requirements By default, the Compliance Operator CRDs include ProfileBundle and Profile objects, in which you can define and set the rules for your compliance scan requirements. 
You can also customize the default profiles by using a TailoredProfile object. 5.4.2.2.1. ProfileBundle object When you install the Compliance Operator, it includes ready-to-run ProfileBundle objects. The Compliance Operator parses the ProfileBundle object and creates a Profile object for each profile in the bundle. It also parses Rule and Variable objects, which are used by the Profile object. Example ProfileBundle object apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1 1 Indicates whether the Compliance Operator was able to parse the content files. Note When the contentFile fails, an errorMessage attribute appears, which provides details of the error that occurred. Troubleshooting When you roll back to a known content image from an invalid image, the ProfileBundle object stops responding and displays the PENDING state. As a workaround, you can move to a different image than the previous one. Alternatively, you can delete and re-create the ProfileBundle object to return to the working state. 5.4.2.2.2. Profile object The Profile object defines the rules and variables that can be evaluated for a certain compliance standard. It contains parsed out details about an OpenSCAP profile, such as its XCCDF identifier and profile checks for a Node or Platform type. You can either directly use the Profile object or further customize it using a TailoredProfile object. Note You cannot create or modify the Profile object manually because it is derived from a single ProfileBundle object. Typically, a single ProfileBundle object can include several Profile objects. Example Profile object apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: "YYYY-MM-DDTMM:HH:SSZ" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: "<version number>" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile> 1 Specify the XCCDF name of the profile. Use this identifier when you define a ComplianceScan object as the value of the profile attribute of the scan. 2 Specify either a Node or Platform . Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform. 3 Specify the list of rules for the profile. Each rule corresponds to a single check. 5.4.2.2.3. Rule object The Rule objects, which form the profiles, are also exposed as objects. Use the Rule object to define your compliance check requirements and specify how it could be fixed.
Example Rule object apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule> 1 Specify the type of check this rule executes. Node profiles scan the cluster nodes and Platform profiles scan the Kubernetes platform. An empty value indicates there is no automated check. 2 Specify the XCCDF name of the rule, which is parsed directly from the datastream. 3 Specify the severity of the rule when it fails. Note The Rule object gets an appropriate label for an easy identification of the associated ProfileBundle object. The ProfileBundle also gets specified in the OwnerReferences of this object. 5.4.2.2.4. TailoredProfile object Use the TailoredProfile object to modify the default Profile object based on your organization requirements. You can enable or disable rules, set variable values, and provide justification for the customization. After validation, the TailoredProfile object creates a ConfigMap , which can be referenced by a ComplianceScan object. Tip You can use the TailoredProfile object by referencing it in a ScanSettingBinding object. For more information about ScanSettingBinding , see ScanSettingBinding object. Example TailoredProfile object apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4 1 This is optional. Name of the Profile object upon which the TailoredProfile is built. If no value is set, a new profile is created from the enableRules list. 2 Specifies the XCCDF name of the tailored profile. 3 Specifies the ConfigMap name, which can be used as the value of the tailoringConfigMap.name attribute of a ComplianceScan . 4 Shows the state of the object such as READY , PENDING , and FAILURE . If the state of the object is ERROR , then the attribute status.errorMessage provides the reason for the failure. With the TailoredProfile object, it is possible to create a new Profile object using the TailoredProfile construct. To create a new Profile , set the following configuration parameters : an appropriate title extends value must be empty scan type annotation on the TailoredProfile object: compliance.openshift.io/product-type: Platform/Node Note If you have not set the product-type annotation, the Compliance Operator defaults to Platform scan type. 
Adding the -node suffix to the name of the TailoredProfile object results in node scan type. 5.4.2.3. Configuring the compliance scan settings After you have defined the requirements of the compliance scan, you can configure it by specifying the type of the scan, occurrence of the scan, and location of the scan. To do so, the Compliance Operator provides you with a ScanSetting object. 5.4.2.3.1. ScanSetting object Use the ScanSetting object to define and reuse the operational policies to run your scans. By default, the Compliance Operator creates the following ScanSetting objects: default - it runs a scan every day at 1 AM on both master and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Remediation is neither applied nor updated automatically. default-auto-apply - it runs a scan every day at 1 AM on both control plane and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Both autoApplyRemediations and autoUpdateRemediations are set to true. Example ScanSetting object apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: "2022-10-18T20:21:00Z" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: "38840" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: "" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Set to true to enable auto remediations for content updates. Set to false to disable auto remediations for content updates. 3 Specify the number of stored scans in the raw result format. The default value is 3 . As the older results get rotated, the administrator must store the results elsewhere before the rotation happens. Note To disable the rotation policy, set the value to 0 . 4 Specify the storage size that should be created for the scan to store the raw results. The default value is 1Gi . 5 Specify the node-role.kubernetes.io label value to schedule the scan for Node type. This value has to match the name of a MachineConfigPool . 6 Specify how often the scan should be run in cron format. 5.4.2.4. Processing the compliance scan requirements with compliance scans settings When you have defined the compliance scan requirements and configured the settings to run the scans, then the Compliance Operator processes it using the ScanSettingBinding object. 5.4.2.4.1. ScanSettingBinding object Use the ScanSettingBinding object to specify your compliance requirements with reference to the Profile or TailoredProfile object. It is then linked to a ScanSetting object, which provides the operational constraints for the scan. Then the Compliance Operator generates the ComplianceSuite object based on the ScanSetting and ScanSettingBinding objects.
Example ScanSettingBinding object apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 1 Specify the details of the Profile or TailoredProfile object to scan your environment. 2 Specify the operational constraints, such as schedule and storage size. The creation of ScanSetting and ScanSettingBinding objects results in the compliance suite. To get the list of compliance suites, run the following command: USD oc get compliancesuites Important If you delete the ScanSettingBinding object, then the compliance suite is also deleted. 5.4.2.5. Tracking the compliance scans After the creation of the compliance suite, you can monitor the status of the deployed scans using the ComplianceSuite object. 5.4.2.5.1. ComplianceSuite object The ComplianceSuite object helps you keep track of the state of the scans. It contains the raw settings to create scans and the overall result. For Node type scans, you should map the scan to the MachineConfigPool , since it contains the remediations for any issues. If you specify a label, ensure it directly applies to a pool. Example ComplianceSuite object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name of the scan> spec: autoApplyRemediations: false 1 schedule: "0 1 * * *" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" nodeSelector: node-role.kubernetes.io/worker: "" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Specify how often the scan should be run in cron format. 3 Specify a list of scan specifications to run in the cluster. 4 Indicates the progress of the scans. 5 Indicates the overall verdict of the suite. The suite in the background creates the ComplianceScan object based on the scans parameter. You can programmatically fetch the ComplianceSuite events. To get the events for the suite, run the following command: USD oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite> Important You might introduce errors if you manually define the ComplianceSuite object, since it contains the XCCDF attributes. 5.4.2.5.2. Advanced ComplianceScan Object The Compliance Operator includes options for advanced users for debugging or integrating with existing tooling. It is recommended that you do not create a ComplianceScan object directly; instead, manage it by using a ComplianceSuite object. Example Advanced ComplianceScan object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name of the scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc...
3 rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" 4 nodeSelector: 5 node-role.kubernetes.io/worker: "" status: phase: DONE 6 result: NON-COMPLIANT 7 1 Specify either Node or Platform . Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform. 2 Specify the XCCDF identifier of the profile that you want to run. 3 Specify the container image that encapsulates the profile files. 4 It is optional. Specify the scan to run a single rule. This rule has to be identified with the XCCDF ID, and has to belong to the specified profile. Note If you skip the rule parameter, then the scan runs for all the available rules of the specified profile. 5 If you are on OpenShift Container Platform and want to generate a remediation, then the nodeSelector label has to match the MachineConfigPool label. Note If you do not specify the nodeSelector parameter or match the MachineConfigPool label, the scan will still run, but it will not create a remediation. 6 Indicates the current phase of the scan. 7 Indicates the verdict of the scan. Important If you delete a ComplianceSuite object, then all the associated scans get deleted. When the scan is complete, it generates the result as Custom Resources of the ComplianceCheckResult object. However, the raw results are available in ARF format. These results are stored in a Persistent Volume (PV), which has a Persistent Volume Claim (PVC) associated with the name of the scan. You can programmatically fetch the ComplianceScan events. To get the events for the scan, run the following command: oc get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the scan> 5.4.2.6. Viewing the compliance results When the compliance suite reaches the DONE phase, you can view the scan results and possible remediations. 5.4.2.6.1. ComplianceCheckResult object When you run a scan with a specific profile, several rules in the profiles are verified. For each of these rules, a ComplianceCheckResult object is created, which provides the state of the cluster for a specific rule. Example ComplianceCheckResult object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2 1 Describes the severity of the scan check. 2 Describes the result of the check. The possible values are: PASS: check was successful. FAIL: check was unsuccessful. INFO: check was successful and found something not severe enough to be considered an error. MANUAL: check cannot automatically assess the status and manual check is required. INCONSISTENT: different nodes report different results. ERROR: the check ran successfully, but could not complete. NOTAPPLICABLE: check did not run as it is not applicable. To get all the check results from a suite, run the following command: oc get compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite
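Because each ComplianceCheckResult also carries the compliance.openshift.io/scan-name and compliance.openshift.io/check-status labels shown in the example above, you can narrow the listing further. The following is a sketch that assumes a scan named workers-scan and filters for failing checks only:

oc get compliancecheckresults -n openshift-compliance \
  -l 'compliance.openshift.io/scan-name=workers-scan,compliance.openshift.io/check-status=FAIL'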
5.4.2.6.2. ComplianceRemediation object For a specific check, the datastream can specify a fix. However, if a fix applicable to Kubernetes is available, then the Compliance Operator creates a ComplianceRemediation object. Example ComplianceRemediation object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3 1 true indicates the remediation was applied. false indicates the remediation was not applied. 2 Includes the definition of the remediation. 3 Indicates remediation that was previously parsed from an earlier version of the content. The Compliance Operator still retains the outdated objects to give the administrator a chance to review the new remediations before applying them. To get all the remediations from a suite, run the following command: oc get complianceremediations \ -l compliance.openshift.io/suite=workers-compliancesuite To list all failing checks that can be remediated automatically, run the following command: oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation' To list all failing checks that can be remediated manually, run the following command: oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation' 5.5. Compliance Operator management 5.5.1. Installing the Compliance Operator Before you can use the Compliance Operator, you must ensure it is deployed in the cluster. Important The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS Classic, and Microsoft Azure Red Hat OpenShift. For more information, see the Knowledgebase article Compliance Operator reports incorrect results on Managed Services . Important Before deploying the Compliance Operator, you are required to define persistent storage in your cluster to store the raw results output. For more information, see Persistent storage overview and Managing the default storage class . 5.5.1.1. Installing the Compliance Operator through the web console Prerequisites You must have admin privileges. You must have a StorageClass resource configured. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Compliance Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-compliance namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Compliance Operator is installed in the openshift-compliance namespace and its status is Succeeded .
If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-compliance project that are reporting issues. Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.5.1.2. Installing the Compliance Operator using the CLI Prerequisites You must have admin privileges. You must have a StorageClass resource configured. Procedure Define a Namespace object: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.16, the pod security label must be set to privileged at the namespace level. Create the Namespace object: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object: USD oc create -f subscription-object.yaml Note If you are setting the global scheduler feature and enable defaultNodeSelector , you must create the namespace manually and update the annotations of the openshift-compliance namespace, or the namespace where the Compliance Operator was installed, with openshift.io/node-selector: "" . This removes the default node selector and prevents deployment failures. Verification Verify the installation succeeded by inspecting the CSV file: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running: USD oc get deploy -n openshift-compliance 5.5.1.3. Installing the Compliance Operator on ROSA hosted control planes (HCP) As of the Compliance Operator 1.5.0 release, the Operator is tested against Red Hat OpenShift Service on AWS using Hosted control planes. Red Hat OpenShift Service on AWS Hosted control planes clusters have restricted access to the control plane, which is managed by Red Hat. By default, the Compliance Operator will schedule to nodes within the master node pool, which is not available in Red Hat OpenShift Service on AWS Hosted control planes installations. This requires you to configure the Subscription object in a way that allows the Operator to schedule on available node pools. This step is necessary for a successful installation on Red Hat OpenShift Service on AWS Hosted control planes clusters. Prerequisites You must have admin privileges. You must have a StorageClass resource configured. 
Procedure Define a Namespace object: Example namespace-object.yaml file apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.16, the pod security label must be set to privileged at the namespace level. Create the Namespace object by running the following command: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object by running the following command: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: "" 1 1 Update the Operator deployment to deploy on worker nodes. Create the Subscription object by running the following command: USD oc create -f subscription-object.yaml Verification Verify that the installation succeeded by running the following command to inspect the cluster service version (CSV) file: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running by using the following command: USD oc get deploy -n openshift-compliance Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.5.1.4. Installing the Compliance Operator on Hypershift hosted control planes The Compliance Operator can be installed in hosted control planes using the OperatorHub by creating a Subscription file. Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You must have admin privileges. Procedure Define a Namespace object similar to the following: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.16, the pod security label must be set to privileged at the namespace level. 
Create the Namespace object by running the following command: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object by running the following command: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: "" env: - name: PLATFORM value: "HyperShift" Create the Subscription object by running the following command: USD oc create -f subscription-object.yaml Verification Verify the installation succeeded by inspecting the CSV file by running the following command: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running by running the following command: USD oc get deploy -n openshift-compliance Additional resources Hosted control planes overview 5.5.1.5. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 5.5.2. Updating the Compliance Operator As a cluster administrator, you can update the Compliance Operator on your OpenShift Container Platform cluster. Important Updating your OpenShift Container Platform cluster to version 4.14 might cause the Compliance Operator to not work as expected. This is due to an ongoing known issue. For more information, see OCPBUGS-18025 . 5.5.2.1. Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 5.5.2.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). 
Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Update channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 5.5.2.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 5.5.3. Managing the Compliance Operator This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom ProfileBundle object. 5.5.3.1. ProfileBundle CR example The ProfileBundle object requires two pieces of information: the contentImage , which is the URL of a container image that contains the compliance content, and the contentFile , which is the file within that image that contains the content. The contentFile parameter is relative to the root of the file system. You can define the built-in rhcos4 ProfileBundle object as shown in the following example: apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: "2022-10-19T12:06:30Z" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: "46741" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 2 status: conditions: - lastTransitionTime: "2022-10-19T12:07:51Z" message: Profile bundle successfully parsed reason: Valid status: "True" type: Ready dataStreamStatus: VALID 1 Location of the file containing the compliance content. 2 Content image location. Important The base image used for the content images must include coreutils . 5.5.3.2. Updating security content Security content is included as container images that the ProfileBundle objects refer to.
To accurately track updates to ProfileBundles and the custom resources parsed from the bundles such as rules or profiles, identify the container image with the compliance content using a digest instead of a tag: USD oc -n openshift-compliance get profilebundles rhcos4 -oyaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: "2022-10-19T12:06:30Z" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: "46741" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: "2022-10-19T12:07:51Z" message: Profile bundle successfully parsed reason: Valid status: "True" type: Ready dataStreamStatus: VALID 1 Security container image. Each ProfileBundle is backed by a deployment. When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles. 5.5.3.3. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 5.5.4. Uninstalling the Compliance Operator You can remove the OpenShift Compliance Operator from your cluster by using the OpenShift Container Platform web console or the CLI. 5.5.4.1. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the web console To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift Compliance Operator must be installed. Procedure To remove the Compliance Operator by using the OpenShift Container Platform web console: Go to the Operators Installed Operators Compliance Operator page. Click All instances . In All namespaces , click the Options menu and delete all ScanSettingBinding, ComplianceSuite, ComplianceScan, and ProfileBundle objects. Switch to the Administration Operators Installed Operators page. Click the Options menu on the Compliance Operator entry and select Uninstall Operator . Switch to the Home Projects page. Search for 'compliance'. Click the Options menu next to the openshift-compliance project, and select Delete Project . Confirm the deletion by typing openshift-compliance in the dialog box, and click Delete . 5.5.4.2. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the CLI To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift Compliance Operator must be installed. Procedure Delete all objects in the namespace.
Delete the ScanSettingBinding objects: USD oc delete ssb --all -n openshift-compliance Delete the ScanSetting objects: USD oc delete ss --all -n openshift-compliance Delete the ComplianceSuite objects: USD oc delete suite --all -n openshift-compliance Delete the ComplianceScan objects: USD oc delete scan --all -n openshift-compliance Delete the ProfileBundle objects: USD oc delete profilebundle.compliance --all -n openshift-compliance Delete the Subscription object: USD oc delete sub --all -n openshift-compliance Delete the CSV object: USD oc delete csv --all -n openshift-compliance Delete the project: USD oc delete project openshift-compliance Example output project.project.openshift.io "openshift-compliance" deleted Verification Confirm the namespace is deleted: USD oc get project/openshift-compliance Example output Error from server (NotFound): namespaces "openshift-compliance" not found 5.6. Compliance Operator scan management 5.6.1. Supported compliance profiles There are several profiles available as part of the Compliance Operator (CO) installation. While you can use the following profiles to assess gaps in a cluster, usage alone does not infer or guarantee compliance with a particular profile and is not an auditor. In order to be compliant or certified under these various standards, you need to engage an authorized auditor such as a Qualified Security Assessor (QSA), Joint Authorization Board (JAB), or other industry recognized regulatory authority to assess your environment. You are required to work with an authorized auditor to achieve compliance with a standard. For more information on compliance support for all Red Hat products, see Product Compliance . Important The Compliance Operator might report incorrect results on some managed platforms, such as OpenShift Dedicated and Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418 . 5.6.1.1. Compliance profiles The Compliance Operator provides profiles to meet industry standard benchmarks. Note The following tables reflect the latest available profiles in the Compliance Operator. 5.6.1.1.1. CIS compliance profiles Table 5.1. Supported CIS compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-cis [1] CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Platform CIS Benchmarks TM [1] x86_64 ppc64le s390x ocp4-cis-1-4 [3] CIS Red Hat OpenShift Container Platform Benchmark v1.4.0 Platform CIS Benchmarks TM [4] x86_64 ppc64le s390x ocp4-cis-1-5 CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Platform CIS Benchmarks TM [4] x86_64 ppc64le s390x ocp4-cis-node [1] CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-cis-node-1-4 [3] CIS Red Hat OpenShift Container Platform Benchmark v1.4.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-cis-node-1-5 CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-cis and ocp4-cis-node profiles maintain the most up-to-date version of the CIS benchmark as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as CIS v1.4.0, use the ocp4-cis-1-4 and ocp4-cis-node-1-4 profiles. 
Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . CIS v1.4.0 is superseded by CIS v1.5.0. It is recommended to apply the latest profile to your environment. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark , where you can then register to download the benchmark. 5.6.1.1.2. Essential Eight compliance profiles Table 5.2. Supported Essential Eight compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-e8 Australian Cyber Security Centre (ACSC) Essential Eight Platform ACSC Hardening Linux Workstations and Servers x86_64 rhcos4-e8 Australian Cyber Security Centre (ACSC) Essential Eight Node ACSC Hardening Linux Workstations and Servers x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) 5.6.1.1.3. FedRAMP High compliance profiles Table 5.3. Supported FedRAMP High compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-high [1] NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ocp4-high-node [1] NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-high-node-rev-4 NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-high-rev-4 NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 rhcos4-high [1] NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-high-rev-4 NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-high , ocp4-high-node and rhcos4-high profiles maintain the most up-to-date version of the FedRAMP High standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as FedRAMP High R4, use the ocp4-high-rev-4 and ocp4-high-node-rev-4 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.4. FedRAMP Moderate compliance profiles Table 5.4.
Supported FedRAMP Moderate compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-moderate [1] NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ppc64le s390x ocp4-moderate-node [1] NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-moderate-node-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-moderate-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ppc64le s390x rhcos4-moderate [1] NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-moderate-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-moderate , ocp4-moderate-node and rhcos4-moderate profiles maintain the most up-to-date version of the FedRAMP Moderate standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as FedRAMP Moderate R4, use the ocp4-moderate-rev-4 and ocp4-moderate-node-rev-4 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.5. NERC-CIP compliance profiles Table 5.5. Supported NERC-CIP compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the OpenShift Container Platform - Platform level Platform NERC CIP Standards x86_64 ocp4-nerc-cip-node North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the OpenShift Container Platform - Node level Node [1] NERC CIP Standards x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for Red Hat Enterprise Linux CoreOS Node NERC CIP Standards x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.6. PCI-DSS compliance profiles Table 5.6. 
Supported PCI-DSS compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-pci-dss [1] PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ocp4-pci-dss-3-2 [3] PCI-DSS v3.2.1 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ppc64le s390x ocp4-pci-dss-4-0 PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ocp4-pci-dss-node [1] PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-pci-dss-node-3-2 [3] PCI-DSS v3.2.1 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-pci-dss-node-4-0 PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-pci-dss and ocp4-pci-dss-node profiles maintain the most up-to-date version of the PCI-DSS standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as PCI-DSS v3.2.1, use the ocp4-pci-dss-3-2 and ocp4-pci-dss-node-3-2 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . PCI-DSS v3.2.1 is superseded by PCI-DSS v4. It is recommended to apply the latest profile to your environment. 5.6.1.1.7. STIG compliance profiles Table 5.7.
Supported STIG compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-stig [1] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Platform DISA-STIG x86_64 ocp4-stig-node [1] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-node-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-node-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Platform DISA-STIG x86_64 ocp4-stig-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Platform DISA-STIG x86_64 rhcos4-stig Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Node DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-stig-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Node DISA-STIG [3] x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-stig-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Node DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-stig , ocp4-stig-node and rhcos4-stig profiles maintain the most up-to-date version of the DISA-STIG benchmark as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as DISA-STIG V2R1, use the ocp4-stig-v2r1 and ocp4-stig-node-v2r1 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . DISA-STIG V1R1 is superseded by DISA-STIG V2R1. It is recommended to apply the latest profile to your environment. 5.6.1.1.8. About extended compliance profiles Some compliance profiles have controls that require following industry best practices, resulting in some profiles extending others. Combining the Center for Internet Security (CIS) best practices with National Institute of Standards and Technology (NIST) security frameworks establishes a path to a secure and compliant environment. For example, the NIST High-Impact and Moderate-Impact profiles extend the CIS profile to achieve compliance. As a result, extended compliance profiles eliminate the need to run both profiles in a single cluster. Table 5.8. Profile extensions Profile Extends ocp4-pci-dss ocp4-cis ocp4-pci-dss-node ocp4-cis-node ocp4-high ocp4-cis ocp4-high-node ocp4-cis-node ocp4-moderate ocp4-cis ocp4-moderate-node ocp4-cis-node ocp4-nerc-cip ocp4-moderate ocp4-nerc-cip-node ocp4-moderate-node 5.6.1.2. Additional resources Compliance Operator profile types 5.6.2.
Compliance Operator scans The ScanSetting and ScanSettingBinding APIs are recommended to run compliance scans with the Compliance Operator. For more information on these API objects, run: USD oc explain scansettings or USD oc explain scansettingbindings 5.6.2.1. Running compliance scans You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a ScanSetting object with reasonable defaults on startup. This ScanSetting object is named default . Note For all-in-one control plane and worker nodes, the compliance scan runs twice on the worker and control plane nodes. The compliance scan might generate inconsistent scan results. You can avoid inconsistent results by defining only a single role in the ScanSetting object. For more information about inconsistent scan results, see Compliance Operator shows INCONSISTENT scan result with worker node . Procedure Inspect the ScanSetting object by running the following command: USD oc describe scansettings default -n openshift-compliance Example output Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none> 1 The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV will use access mode ReadWriteOnce because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, ReadWriteOnce access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the ReadWriteOnce access mode can be mounted by only one pod at time, so it is important to remember to delete the helper pods. Otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans. 2 The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated. 3 The Compliance Operator will allocate one GB of storage for the scan results. 4 The scansetting.rawResultStorage.storageClassName field specifies the storageClassName value to use when creating the PersistentVolumeClaim object to store the raw results. The default value is null, which will attempt to use the default storage class configured in the cluster. If there is no default class specified, then you must set a default class. 5 6 If the scan setting uses any profiles that scan cluster nodes, scan these node roles. 7 The default scan setting object scans all the nodes. 8 The default scan setting object runs scans at 01:00 each day. 
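If the defaults shown above do not fit your environment, you can adjust the default ScanSetting object in place instead of creating a new one. The following is a minimal sketch that only changes the cron schedule; the value 0 3 * * 6 (03:00 every Saturday) is an example, not a recommendation:

USD oc -n openshift-compliance patch scansettings default \
    --type merge -p '{"schedule":"0 3 * * 6"}'

The same merge-patch approach works for the other top-level attributes shown above, such as roles or rawResultStorage.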
As an alternative to the default scan setting, you can use default-auto-apply , which has the following settings: Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none> 1 2 Setting autoUpdateRemediations and autoApplyRemediations flags to true allows you to easily create ScanSetting objects that auto-remediate without extra steps. Create a ScanSettingBinding object that binds to the default ScanSetting object and scans the cluster using the cis and cis-node profiles. For example: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Create the ScanSettingBinding object by running: USD oc create -f <file-name>.yaml -n openshift-compliance At this point in the process, the ScanSettingBinding object is reconciled and based on the Binding and the Bound settings. The Compliance Operator creates a ComplianceSuite object and the associated ComplianceScan objects. Follow the compliance scan progress by running: USD oc get compliancescan -w -n openshift-compliance The scans progress through the scanning phases and eventually reach the DONE phase when complete. In most cases, the result of the scan is NON-COMPLIANT . You can review the scan results and start applying remediations to make the cluster compliant. See Managing Compliance Operator remediation for more information. 5.6.2.2. Setting custom storage size for results While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. 
This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources. A related parameter is rawResultStorage.rotation which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3, setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per a raw ARF scan report, you can calculate the right PV size for your environment. 5.6.2.2.1. Using custom result storage values Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName attribute. Important If your cluster does not specify a default storage class, this attribute must be set. Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results: Example ScanSetting CR apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' 5.6.2.3. Scheduling the result server pod on a worker node The result server pod mounts the persistent volume (PV) that stores the raw Asset Reporting Format (ARF) scan results. The nodeSelector and tolerations attributes enable you to configure the location of the result server pod. This is helpful for those environments where control plane nodes are not permitted to mount persistent volumes. Procedure Create a ScanSetting custom resource (CR) for the Compliance Operator: Define the ScanSetting CR, and save the YAML file, for example, rs-workers.yaml : apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * 1 The Compliance Operator uses this node to store scan results in ARF format. 2 The result server pod tolerates all taints. To create the ScanSetting CR, run the following command: USD oc create -f rs-workers.yaml Verification To verify that the ScanSetting object is created, run the following command: USD oc get scansettings rs-on-workers -n openshift-compliance -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: "2021-11-19T19:36:36Z" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: "48305" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true 5.6.2.4. ScanSetting Custom Resource The ScanSetting Custom Resource now allows you to override the default CPU and memory limits of scanner pods through the scan limits attribute. 
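As a sketch of what such an override might look like, the following ScanSetting raises the scanner limits. It assumes the attribute is spelled scanLimits with memory and cpu keys, matching the scan limits attribute referenced above; verify the exact field names against the ScanSetting CRD in your cluster before applying it:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
scanLimits:
  memory: 1024Mi       # assumed field name; raises the scanner memory limit
  cpu: 500m            # assumed field name; raises the scanner CPU limit
roles:
- worker
- master
scanTolerations:
- operator: Exists
schedule: '0 1 * * *'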
The Compliance Operator will use defaults of 500Mi memory, 100m CPU for the scanner container, and 200Mi memory with 100m CPU for the api-resource-collector container. To set the memory limits of the Operator, modify the Subscription object if installed through OLM or the Operator deployment itself. To increase the default CPU and memory limits of the Compliance Operator, see Increasing Compliance Operator resource limits . Important Increasing the memory limit for the Compliance Operator or the scanner pods is needed if the default limits are not sufficient and the Operator or scanner pods are ended by the Out Of Memory (OOM) process. 5.6.2.5. Configuring the hosted control planes management cluster If you are hosting your own Hosted control plane or Hypershift environment and want to scan a Hosted Cluster from the management cluster, you will need to set the name and prefix namespace for the target Hosted Cluster. You can achieve this by creating a TailoredProfile . Important This procedure only applies to users managing their own hosted control planes environment. Note Only ocp4-cis and ocp4-pci-dss profiles are supported in hosted control planes management clusters. Prerequisites The Compliance Operator is installed in the management cluster. Procedure Obtain the name and namespace of the hosted cluster to be scanned by running the following command: USD oc get hostedcluster -A Example output NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available In the management cluster, create a TailoredProfile extending the scan Profile and define the name and namespace of the Hosted Cluster to be scanned: Example management-tailoredprofile.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3 1 Variable. Only ocp4-cis and ocp4-pci-dss profiles are supported in hosted control planes management clusters. 2 The value is the NAME from the output in the step. 3 The value is the NAMESPACE from the output in the step. Create the TailoredProfile : USD oc create -n openshift-compliance -f mgmt-tp.yaml 5.6.2.6. Applying resource requests and limits When the kubelet starts a container as part of a Pod, the kubelet passes that container's requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined. The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution. If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set memory.min and memory.low values. 
If a container attempts to allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an emptyDir. The kubelet tracks tmpfs emptyDir volumes as container memory is used, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod's container might be evicted. Important A container may not exceed its CPU limit for extended periods. Container run times do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see Troubleshooting the Compliance Operator . 5.6.2.7. Scheduling Pods with container resource requests When a Pod is created, the scheduler selects a Node for the Pod to run on. Each node has a maximum capacity for each resource type in the amount of CPU and memory it can provide for the Pods. The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity nodes for each resource type. Although memory or CPU resource usage on nodes is very low, the scheduler might still refuse to place a Pod on a node if the capacity check fails to protect against a resource shortage on a node. For each container, you can specify the following resource limits and request: spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size> Although you can specify requests and limits for only individual containers, it is also useful to consider the overall resource requests and limits for a pod. For a particular resource, a container resource request or limit is the sum of the resource requests or limits of that type for each container in the pod. Example container resource requests and limits apiVersion: v1 kind: Pod metadata: name: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: "64Mi" cpu: "250m" limits: 2 memory: "128Mi" cpu: "500m" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 The container is requesting 64 Mi of memory and 250 m CPU. 2 The container's limits are 128 Mi of memory and 500 m CPU. 5.6.3. Tailoring the Compliance Operator While the Compliance Operator comes with ready-to-use profiles, they must be modified to fit the organizations' needs and requirements. The process of modifying a profile is called tailoring . The Compliance Operator provides the TailoredProfile object to help tailor profiles. 5.6.3.1. Creating a new tailored profile You can write a tailored profile from scratch by using the TailoredProfile object. Set an appropriate title and description and leave the extends field empty. Indicate to the Compliance Operator what type of scan this custom profile will generate: Node scan: Scans the Operating System. 
Platform scan: Scans the OpenShift Container Platform configuration. Procedure Set the following annotation on the TailoredProfile object: Example new-profile.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster 1 Set Node or Platform accordingly. 2 The extends field is optional. 3 Use the description field to describe the function of the new TailoredProfile object. 4 Give your TailoredProfile object a title with the title field. Note Adding the -node suffix to the name field of the TailoredProfile object is similar to adding the Node product type annotation and generates an Operating System scan. 5.6.3.2. Using tailored profiles to extend existing ProfileBundles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map, which must contain a key called tailoring.xml and the value of this key is the tailoring contents. Procedure Browse the available rules for the Red Hat Enterprise Linux CoreOS (RHCOS) ProfileBundle : USD oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Browse the available variables in the same ProfileBundle : USD oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Create a tailored profile named nist-moderate-modified : Choose which rules you want to add to the nist-moderate-modified tailored profile. This example extends the rhcos4-moderate profile by disabling two rules and changing one value. Use the rationale value to describe why these changes were made: Example new-profile-node.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive Table 5.9. Attributes for spec variables Attribute Description extends Name of the Profile object upon which this TailoredProfile is built. title Human-readable title of the TailoredProfile . disableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be disabled. The rationale value is human-readable text describing why the rule is disabled. manualRules A list of name and rationale pairs. When a manual rule is added, the check result status will always be manual and remediation will not be generated. This attribute is automatic and by default has no values when set as a manual rule. 
enableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be enabled. The rationale value is human-readable text describing why the rule is enabled. description Human-readable text describing the TailoredProfile . setValues A list of name, rationale, and value groupings. Each name refers to a name of the value set. The rationale is human-readable text describing the set. The value is the actual setting. Add the tailoredProfile.spec.manualRules attribute: Example tailoredProfile.spec.manualRules.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges Create the TailoredProfile object: USD oc create -n openshift-compliance -f new-profile-node.yaml 1 1 The TailoredProfile object is created in the default openshift-compliance namespace. Example output tailoredprofile.compliance.openshift.io/nist-moderate-modified created Define the ScanSettingBinding object to bind the new nist-moderate-modified tailored profile to the default ScanSetting object. Example new-scansettingbinding.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Create the ScanSettingBinding object: USD oc create -n openshift-compliance -f new-scansettingbinding.yaml Example output scansettingbinding.compliance.openshift.io/nist-moderate-modified created 5.6.4. Retrieving Compliance Operator raw results When proving compliance for your OpenShift Container Platform cluster, you might need to provide the scan results for auditing purposes. 5.6.4.1. Obtaining Compliance Operator raw results from a persistent volume Procedure The Compliance Operator generates and stores the raw results in a persistent volume. These results are in Asset Reporting Format (ARF). Explore the ComplianceSuite object: USD oc get compliancesuites nist-moderate-modified \ -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage' Example output { "name": "ocp4-moderate", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-master", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-worker", "namespace": "openshift-compliance" } This shows the persistent volume claims where the raw results are accessible. 
Verify the raw data location by using the name and namespace of one of the results: USD oc get pvc -n openshift-compliance rhcos4-moderate-worker Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m Fetch the raw results by spawning a pod that mounts the volume and copying the results: USD oc create -n openshift-compliance -f pod.yaml Example pod.yaml apiVersion: "v1" kind: Pod metadata: name: pv-extract spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: ["sleep", "3000"] volumeMounts: - mountPath: "/workers-scan-results" name: workers-scan-vol securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker After the pod is running, download the results: USD oc cp pv-extract:/workers-scan-results -n openshift-compliance . Important Spawning a pod that mounts the persistent volume will keep the claim as Bound . If the volume's storage class in use has permissions set to ReadWriteOnce , the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will not be possible for the Operator to schedule a pod and continue storing results in this location. After the extraction is complete, the pod can be deleted: USD oc delete pod pv-extract -n openshift-compliance 5.6.5. Managing Compliance Operator result and remediation Each ComplianceCheckResult represents a result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation object with the same name, owned by the ComplianceCheckResult is created. Unless requested, the remediations are not applied automatically, which gives an OpenShift Container Platform administrator the opportunity to review what the remediation does and only apply a remediation once it has been verified. Important Full remediation for Federal Information Processing Standards (FIPS) compliance requires enabling FIPS mode for the cluster. To enable FIPS mode, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . FIPS mode is supported on the following architectures: x86_64 ppc64le s390x 5.6.5.1. Filters for compliance check results By default, the ComplianceCheckResult objects are labeled with several useful labels that allow you to query the checks and decide on the steps after the results are generated. List checks that belong to a specific suite: USD oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite List checks that belong to a specific scan: USD oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/scan=workers-scan Not all ComplianceCheckResult objects create ComplianceRemediation objects. Only ComplianceCheckResult objects that can be remediated automatically do. A ComplianceCheckResult object has a related remediation if it is labeled with the compliance.openshift.io/automated-remediation label. The name of the remediation is the same as the name of the check. 
List all failing checks that can be remediated automatically: USD oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation' List all failing checks sorted by severity: USD oc get compliancecheckresults -n openshift-compliance \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high' Example output NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high List all failing checks that must be remediated manually: USD oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation' The manual remediation steps are typically stored in the description attribute in the ComplianceCheckResult object. Table 5.10. ComplianceCheckResult Status ComplianceCheckResult Status Description PASS Compliance check ran to completion and passed. FAIL Compliance check ran to completion and failed. INFO Compliance check ran to completion and found something not severe enough to be considered an error. MANUAL Compliance check does not have a way to automatically assess the success or failure and must be checked manually. INCONSISTENT Compliance check reports different results from different sources, typically cluster nodes. ERROR Compliance check ran, but could not complete properly. NOT-APPLICABLE Compliance check did not run because it is not applicable or not selected. 5.6.5.2. Reviewing a remediation Review both the ComplianceRemediation object and the ComplianceCheckResult object that owns the remediation. The ComplianceCheckResult object contains human-readable descriptions of what the check does and the hardening trying to prevent, as well as other metadata like the severity and the associated security controls. The ComplianceRemediation object represents a way to fix the problem described in the ComplianceCheckResult . After first scan, check for remediations with the state MissingDependencies . Below is an example of a check and a remediation called sysctl-net-ipv4-conf-all-accept-redirects . 
This example is redacted to only show spec and status and omits metadata : spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied The remediation payload is stored in the spec.current attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig object. For Platform scans, the remediation payload is often a different kind of an object (for example, a ConfigMap or Secret object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text. To see exactly what the remediation does when applied, the MachineConfig object contents use the Ignition objects for the configuration. See the Ignition specification for further information about the format. In our example, the spec.config.storage.files[0].path attribute specifies the file that is being created by this remediation ( /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf ) and the spec.config.storage.files[0].contents.source attribute specifies the contents of that file. Note The contents of the files are URL-encoded. Use the following command to view the contents: USD echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))" Example output net.ipv4.conf.all.accept_redirects=0 Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.3. Applying remediation when using customized machine config pools When you create a custom MachineConfigPool , add a label to the MachineConfigPool so that machineConfigPoolSelector present in the KubeletConfig can match the label with MachineConfigPool . Important Do not set protectKernelDefaults: false in the KubeletConfig file, because the MachineConfigPool object might fail to unpause unexpectedly after the Compliance Operator finishes applying remediation. Procedure List the nodes. USD oc get nodes -n openshift-compliance Example output NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.29.4 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.29.4 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.29.4 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.29.4 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.29.4 Add a label to nodes. USD oc -n openshift-compliance \ label node ip-10-0-166-81.us-east-2.compute.internal \ node-role.kubernetes.io/<machine_config_pool_name>= Example output node/ip-10-0-166-81.us-east-2.compute.internal labeled Create custom MachineConfigPool CR.
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: "" 1 The labels field defines the label name to add for the machine config pool (MCP). Verify that the MCP was created successfully. USD oc get mcp -w 5.6.5.4. Evaluating KubeletConfig rules against default configuration values OpenShift Container Platform infrastructure might contain incomplete configuration files at run time, and nodes assume default configuration values for missing configuration options. Some configuration options can be passed as command line arguments. As a result, the Compliance Operator cannot verify if the configuration file on the node is complete because it might be missing options used in the rule checks. To prevent false negative results where the default configuration value passes a check, the Compliance Operator uses the Node/Proxy API to fetch the configuration for each node in a node pool, then all configuration options that are consistent across nodes in the node pool are stored in a file that represents the configuration for all nodes within that node pool. This increases the accuracy of the scan results. No additional configuration changes are required to use this feature with default master and worker node pools configurations. 5.6.5.5. Scanning custom node pools The Compliance Operator does not maintain a copy of each node pool configuration. The Compliance Operator aggregates consistent configuration options for all nodes within a single node pool into one copy of the configuration file. The Compliance Operator then uses the configuration file for a particular node pool to evaluate rules against nodes within that pool. Procedure Add the example role to the ScanSetting object that will be stored in the ScanSettingBinding CR: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' Create a scan that uses the ScanSettingBinding CR: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Verification The Platform KubeletConfig rules are checked through the Node/Proxy object. You can find those rules by running the following command: USD oc get rules -o json | jq '.items[] | select(.checkType == "Platform") | select(.metadata.name | contains("ocp4-kubelet-")) | .metadata.name' 5.6.5.6. Remediating KubeletConfig sub pools KubeletConfig remediation labels can be applied to MachineConfigPool sub-pools. Procedure Add a label to the sub-pool MachineConfigPool CR: USD oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>= 5.6.5.7.
Applying a remediation The boolean attribute spec.apply controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to true : USD oc -n openshift-compliance \ patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":true}}' --type=merge After the Compliance Operator processes the applied remediation, the status.ApplicationState attribute would change to Applied or to Error if incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations are rendered into a MachineConfig object named 75-USDscan-name-USDsuite-name . That MachineConfig object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine control daemon running on each node. Note that when the Machine Config Operator applies a new MachineConfig object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite 75-USDscan-name-USDsuite-name MachineConfig object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the .spec.paused attribute of a MachineConfigPool object to true . The Compliance Operator can apply remediations automatically. Set autoApplyRemediations: true in the ScanSetting top-level object. Warning Applying remediations automatically should only be done with careful consideration. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.8. Remediating a platform check manually Checks for Platform scans typically have to be remediated manually by the administrator for two reasons: It is not always possible to automatically determine the value that must be set. One of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow. Different checks modify different API objects, requiring automated remediation to possess root or superuser access to modify objects in the cluster, which is not advised. Procedure The example below uses the ocp4-ocp-allowed-registries-for-import rule, which would fail on a default OpenShift Container Platform installation. 
Inspect the rule oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -oyaml , the rule is to limit the registries the users are allowed to import images from by setting the allowedRegistriesForImport attribute, The warning attribute of the rule also shows the API object checked, so it can be modified and remediate the issue: USD oc edit image.config.openshift.io/cluster Example output apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2020-09-10T10:12:54Z" generation: 2 name: cluster resourceVersion: "363096" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 Re-run the scan: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= 5.6.5.9. Updating remediations When a new version of compliance content is used, it might deliver a new and different version of a remediation than the version. The Compliance Operator will keep the old version of the remediation applied. The OpenShift Container Platform administrator is also notified of the new version to review and apply. A ComplianceRemediation object that had been applied earlier, but was updated changes its status to Outdated . The outdated objects are labeled so that they can be searched for easily. The previously applied remediation contents would then be stored in the spec.outdated attribute of a ComplianceRemediation object and the new updated contents would be stored in the spec.current attribute. After updating the content to a newer version, the administrator then needs to review the remediation. As long as the spec.outdated attribute exists, it would be used to render the resulting MachineConfig object. After the spec.outdated attribute is removed, the Compliance Operator re-renders the resulting MachineConfig object, which causes the Operator to push the configuration to the nodes. Procedure Search for any outdated remediations: USD oc -n openshift-compliance get complianceremediations \ -l complianceoperator.openshift.io/outdated-remediation= Example output NAME STATE workers-scan-no-empty-passwords Outdated The currently applied remediation is stored in the Outdated attribute and the new, unapplied remediation is stored in the Current attribute. If you are satisfied with the new version, remove the Outdated field. If you want to keep the updated content, remove the Current and Outdated attributes. Apply the newer version of the remediation: USD oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords \ --type json -p '[{"op":"remove", "path":/spec/outdated}]' The remediation state will switch from Outdated to Applied : USD oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords Example output NAME STATE workers-scan-no-empty-passwords Applied The nodes will apply the newer remediation version and reboot. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.10. Unapplying a remediation It might be required to unapply a remediation that was previously applied. 
Procedure Set the apply flag to false : USD oc -n openshift-compliance \ patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":false}}' --type=merge The remediation status will change to NotApplied and the composite MachineConfig object would be re-rendered to not include the remediation. Important All affected nodes with the remediation will be rebooted. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.11. Removing a KubeletConfig remediation KubeletConfig remediations are included in node-level profiles. In order to remove a KubeletConfig remediation, you must manually remove it from the KubeletConfig objects. This example demonstrates how to remove the compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation. Procedure Locate the scan-name and compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation: USD oc -n openshift-compliance get remediation \ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: "2022-01-05T19:52:27Z" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: "84820" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied 1 The scan name of the remediation. 2 The remediation that was added to the KubeletConfig objects. Note If the remediation invokes an evictionHard kubelet configuration, you must specify all of the evictionHard parameters: memory.available , nodefs.available , nodefs.inodesFree , imagefs.available , and imagefs.inodesFree . If you do not specify all parameters, only the specified parameters are applied and the remediation will not function properly. 
Remove the remediation: Set apply to false for the remediation object: USD oc -n openshift-compliance patch \ complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available \ -p '{"spec":{"apply":false}}' --type=merge Using the scan-name , find the KubeletConfig object that the remediation was applied to: USD oc -n openshift-compliance get kubeletconfig \ --selector compliance.openshift.io/scan-name=one-rule-tp-node-master Example output NAME AGE compliance-operator-kubelet-master 2m34s Manually remove the remediation, imagefs.available: 10% , from the KubeletConfig object: USD oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master Important All affected nodes with the remediation will be rebooted. Note You must also exclude the rule from any scheduled scans in your tailored profiles that auto-applies the remediation, otherwise, the remediation will be re-applied during the scheduled scan. 5.6.5.12. Inconsistent ComplianceScan The ScanSetting object lists the node roles that the compliance scans generated from the ScanSetting or ScanSettingBinding objects would scan. Each node role usually maps to a machine config pool. Important It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical. If some of the results are different from others, the Compliance Operator flags a ComplianceCheckResult object where some of the nodes will report as INCONSISTENT . All ComplianceCheckResult objects are also labeled with compliance.openshift.io/inconsistent-check . Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the compliance.openshift.io/most-common-status annotation and the annotation compliance.openshift.io/inconsistent-source contains pairs of hostname:status of check statuses that differ from the most common status. If no common state can be found, all the hostname:status pairs are listed in the compliance.openshift.io/inconsistent-source annotation . If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. The compliance scan must be re-run to get a consistent result by annotating the scan with the compliance.openshift.io/rescan= option: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= 5.6.5.13. Additional resources Modifying nodes . 5.6.6. Performing advanced Compliance Operator tasks The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling. 5.6.6.1. Using the ComplianceSuite and ComplianceScan objects directly While it is recommended that users take advantage of the ScanSetting and ScanSettingBinding objects to define the suites and scans, there are valid use cases to define the ComplianceSuite objects directly: Specifying only a single rule to scan. This can be useful for debugging together with the debug: true attribute which increases the OpenSCAP scanner verbosity, as the debug mode tends to get quite verbose otherwise. Limiting the test to one rule helps to lower the amount of debug information. Providing a custom nodeSelector. In order for a remediation to be applicable, the nodeSelector must match a pool. 
Pointing the Scan to a bespoke config map with a tailoring file. For testing or development when the overhead of parsing profiles from bundles is not required. The following example shows a ComplianceSuite that scans the worker machines with only a single rule: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: "" The ComplianceSuite object and the ComplianceScan objects referred to above specify several attributes in a format that OpenSCAP expects. To find out the profile, content, or rule values, you can start by creating a similar Suite from ScanSetting and ScanSettingBinding or inspect the objects parsed from the ProfileBundle objects like rules or profiles. Those objects contain the xccdf_org identifiers you can use to refer to them from a ComplianceSuite . 5.6.6.2. Setting PriorityClass for ScanSetting scans In large scale environments, the default PriorityClass object can be too low to guarantee Pods execute scans on time. For clusters that must maintain compliance or guarantee automated scanning, it is recommended to set the PriorityClass variable to ensure the Compliance Operator is always given priority in resource constrained situations. Procedure Set the PriorityClass variable: apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists 1 If the PriorityClass referenced in the ScanSetting cannot be found, the Operator will leave the PriorityClass empty, issue a warning, and continue scheduling scans without a PriorityClass . 5.6.6.3. Using raw tailored profiles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map which must contain a key called tailoring.xml and the value of this key is the tailoring contents. 
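Conceptually, the referenced config map therefore has the following shape. This is only a minimal sketch: the config map name matches the one created in the procedure that follows, and the XCCDF tailoring document itself is abbreviated:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nist-moderate-modified
  namespace: openshift-compliance
data:
  tailoring.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- XCCDF tailoring content -->
    ...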
Procedure Create the ConfigMap object from a file: USD oc -n openshift-compliance \ create configmap nist-moderate-modified \ --from-file=tailoring.xml=/path/to/the/tailoringFile.xml Reference the tailoring file in a scan that belongs to a suite: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: "" 5.6.6.4. Performing a rescan Typically you will want to re-run a scan on a defined schedule, like every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single scan, annotate the scan with the compliance.openshift.io/rescan= option: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= A rescan generates four additional mc for rhcos-moderate profile: USD oc get mc Example output 75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub Important When the scan setting default-auto-apply label is applied, remediations are applied automatically and outdated remediations automatically update. If there are remediations that were not applied due to dependencies, or remediations that had been outdated, rescanning applies the remediations and might trigger a reboot. Only remediations that use MachineConfig objects trigger reboots. If there are no updates or dependencies to be applied, no reboot occurs. 5.6.6.5. Setting custom storage size for results While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources. A related parameter is rawResultStorage.rotation which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3, setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per a raw ARF scan report, you can calculate the right PV size for your environment. 5.6.6.5.1. Using custom result storage values Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName attribute. Important If your cluster does not specify a default storage class, this attribute must be set. 
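As a rough sizing illustration using the 100MB-per-report estimate above: retaining 10 results for a node scan that covers 10 nodes (the node count is only an assumption for illustration) works out to approximately 10 x 10 x 100MB = 10GB, which is the size used in the following example.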
Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results: Example ScanSetting CR apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' 5.6.6.6. Applying remediations generated by suite scans Although you can use the autoApplyRemediations boolean parameter in a ComplianceSuite object, you can alternatively annotate the object with compliance.openshift.io/apply-remediations . This allows the Operator to apply all of the created remediations. Procedure Apply the compliance.openshift.io/apply-remediations annotation by running: USD oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations= 5.6.6.7. Automatically update remediations In some cases, a scan with newer content might mark remediations as OUTDATED . As an administrator, you can apply the compliance.openshift.io/remove-outdated annotation to apply new remediations and remove the outdated ones. Procedure Apply the compliance.openshift.io/remove-outdated annotation: USD oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated= Alternatively, set the autoUpdateRemediations flag in a ScanSetting or ComplianceSuite object to update the remediations automatically. 5.6.6.8. Creating a custom SCC for the Compliance Operator In some environments, you must create a custom Security Context Constraints (SCC) file to ensure the correct permissions are available to the Compliance Operator api-resource-collector . Prerequisites You must have admin privileges. Procedure Define the SCC in a YAML file named restricted-adjusted-compliance.yaml : SecurityContextConstraints object definition allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret 1 The priority of this SCC must be higher than any other SCC that applies to the system:authenticated group. 2 Service Account used by Compliance Operator Scanner pod. 
Create the SCC: USD oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml Example output securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created Verification Verify the SCC was created: USD oc get -n openshift-compliance scc restricted-adjusted-compliance Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] 5.6.6.9. Additional resources Managing security context constraints 5.6.7. Troubleshooting Compliance Operator scans This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips: The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command: USD oc get events -n openshift-compliance Or view events for an object like a scan using the command: USD oc describe -n openshift-compliance compliancescan/cis-compliance The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a ComplianceRemediation cannot be applied, view the messages from the remediationctrl controller. You can filter the messages from a single controller by parsing with jq : USD oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f \ | jq -c 'select(.logger == "profilebundlectrl")' The timestamps are logged as seconds since UNIX epoch in UTC. To convert them to a human-readable date, use date -d @timestamp --utc , for example: USD date -d @1596184628.955853 --utc Many custom resources, most importantly ComplianceSuite and ScanSetting , allow the debug option to be set. Enabling this option increases verbosity of the OpenSCAP scanner pods, as well as some other helper pods. If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule to find the rule ID from the corresponding ComplianceCheckResult object and use it as the rule attribute value in a Scan CR. Then, together with the debug option enabled, the scanner container logs in the scanner pod would show the raw OpenSCAP logs. 5.6.7.1. Anatomy of a scan The following sections outline the components and stages of Compliance Operator scans. 5.6.7.1.1. Compliance sources The compliance content is stored in Profile objects that are generated from a ProfileBundle object. The Compliance Operator creates a ProfileBundle object for the cluster and another for the cluster nodes. USD oc get -n openshift-compliance profilebundle.compliance USD oc get -n openshift-compliance profile.compliance The ProfileBundle objects are processed by deployments labeled with the Bundle name. To troubleshoot an issue with the Bundle , you can find the deployment and view logs of the pods in a deployment: USD oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser USD oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4 USD oc logs -n openshift-compliance pods/<pod-name> USD oc describe -n openshift-compliance pod/<pod-name> -c profileparser 5.6.7.1.2. 
The ScanSetting and ScanSettingBinding objects lifecycle and debugging With valid compliance content sources, the high-level ScanSetting and ScanSettingBinding objects can be used to generate ComplianceSuite and ComplianceScan objects: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true # For each role, a separate scan will be created pointing # to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Both ScanSetting and ScanSettingBinding objects are handled by the same controller tagged with logger=scansettingbindingctrl . These objects have no status. Any issues are communicated in form of events: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created Now a ComplianceSuite object is created. The flow continues to reconcile the newly created ComplianceSuite . 5.6.7.1.3. ComplianceSuite custom resource lifecycle and debugging The ComplianceSuite CR is a wrapper around ComplianceScan CRs. The ComplianceSuite CR is handled by controller tagged with logger=suitectrl . This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the suitectrl also handles creating a CronJob CR that re-runs the scans in the suite after the initial run is done: USD oc get cronjobs Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m For the most important issues, events are emitted. View them with oc describe compliancesuites/<name> . The Suite objects also have a Status subresource that is updated when any of Scan objects that belong to this suite update their Status subresource. After all expected scans are created, control is passed to the scan controller. 5.6.7.1.4. ComplianceScan custom resource lifecycle and debugging The ComplianceScan CRs are handled by the scanctrl controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases: 5.6.7.1.4.1. Pending phase The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with ERROR result, otherwise proceeds to the Launching phase. 5.6.7.1.4.2. Launching phase In this phase, several config maps that contain either environment for the scanner pods or directly the script that the scanner pods will be evaluating. List the config maps: USD oc -n openshift-compliance get cm \ -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script= These config maps will be used by the scanner pods. If you ever needed to modify the scanner behavior, change the scanner debug level or print the raw results, modifying the config maps is the way to go. 
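For example, you could open one of the listed config maps for editing. This is only a sketch; <script_configmap_name> stands for a name returned by the previous command:

oc -n openshift-compliance edit cm <script_configmap_name>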
Afterwards, a persistent volume claim is created per scan to store the raw ARF results: USD oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker The PVCs are mounted by a per-scan ResultServer deployment. A ResultServer is a simple HTTP server where the individual scanner pods upload the full ARF results to. Each server can run on a different node. The full ARF results might be very large and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the ResultServer deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the ResultServer is protected by mutual TLS protocols. Finally, the scanner pods are launched in this phase; one scanner pod for a Platform scan instance and one scanner pod per matching node for a node scan instance. The per-node pods are labeled with the node name. Each pod is always labeled with the ComplianceScan name: USD oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels Example output NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner + The scan then proceeds to the Running phase. 5.6.7.1.4.3. Running phase The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase: init container : There is one init container called content-container . It runs the contentImage container and executes a single command that copies the contentFile to the /content directory shared with the other containers in this pod. scanner : This container runs the scan. For node scans, the container mounts the node filesystem as /host and mounts the content delivered by the init container. The container also mounts the entrypoint ConfigMap created in the Launching phase and executes it. The default script in the entrypoint ConfigMap executes OpenSCAP and stores the result files in the /results directory shared between the pod's containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the debug flag. logcollector : The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the ResultServer and separately uploads the XCCDF results along with scan result and OpenSCAP result code as a ConfigMap. These result config maps are labeled with the scan name ( compliance.openshift.io/scan-name=rhcos4-e8-worker ): USD oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Example output Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version="1.0" encoding="UTF-8"?> ... 
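If you want to dump the XCCDF results from such a config map for closer inspection, you can read the results key directly. This is a sketch that reuses the config map name from the example above:

oc -n openshift-compliance get cm rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod \
  -o jsonpath='{.data.results}'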
Scanner pods for Platform scans are similar, except: There is one extra init container called api-resource-collector that reads the OpenSCAP content provided by the content-container init, container, figures out which API resources the content needs to examine and stores those API resources to a shared directory where the scanner container would read them from. The scanner container does not need to mount the host file system. When the scanner pods are done, the scans move on to the Aggregating phase. 5.6.7.1.4.4. Aggregating phase In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose it to take the result ConfigMap objects, read the results and for each check result create the corresponding Kubernetes object. If the check failure can be automatically remediated, a ComplianceRemediation object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container. When a config map is processed by an aggregator pod, it is labeled the compliance-remediations/processed label. The result of this phase are ComplianceCheckResult objects: USD oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium and ComplianceRemediation objects: USD oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase. 5.6.7.1.4.5. Done phase In the final scan phase, the scan resources are cleaned up if needed and the ResultServer deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the scan instance would then recreate the deployment again. It is also possible to trigger a re-run of a scan in the Done phase by annotating it: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with autoApplyRemediations: true . The OpenShift Container Platform administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the ComplianceSuite controller takes over in the Done phase, pauses the machine config pool to which the scan maps to and applies all the remediations in one go. If a remediation is applied, the ComplianceRemediation controller takes over. 5.6.7.1.5. ComplianceRemediation controller lifecycle and debugging The example scan has reported some findings. One of the remediations can be enabled by toggling its apply attribute to true : USD oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge The ComplianceRemediation controller ( logger=remediationctrl ) reconciles the modified object. 
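To follow that reconciliation, you can filter the Operator logs for this logger, reusing the jq pattern shown earlier in the troubleshooting tips. The pod name below is only an example; replace it with your Operator pod:

oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f \
  | jq -c 'select(.logger == "remediationctrl")'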
The result of the reconciliation is change of status of the remediation object that is reconciled, but also a change of the rendered per-suite MachineConfig object that contains all the applied remediations. The MachineConfig object always begins with 75- and is named after the scan and the suite: USD oc get mc | grep 75- Example output 75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s The remediations the mc currently consists of are listed in the machine config's annotations: USD oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements Example output Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod: The ComplianceRemediation controller's algorithm works like this: All currently applied remediations are read into an initial remediation set. If the reconciled remediation is supposed to be applied, it is added to the set. A MachineConfig object is rendered from the set and annotated with names of remediations in the set. If the set is empty (the last remediation was unapplied), the rendered MachineConfig object is removed. If and only if the rendered machine config is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted). Creating or modifying a MachineConfig object triggers a reboot of nodes that match the machineconfiguration.openshift.io/role label - see the Machine Config Operator documentation for more details. The remediation loop ends once the rendered machine config is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= The scan will run and finish. Check for the remediation to pass: USD oc -n openshift-compliance \ get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod Example output NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium 5.6.7.1.6. Useful labels Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the compliance.openshift.io/scan-name label. The workload identifier is labeled with the workload label. The Compliance Operator schedules the following workloads: scanner : Performs the compliance scan. resultserver : Stores the raw results for the compliance scan. aggregator : Aggregates the results, detects inconsistencies and outputs result objects (checkresults and remediations). suitererunner : Will tag a suite to be re-run (when a schedule is set). profileparser : Parses a datastream and creates the appropriate profiles, rules and variables. When debugging and logs are required for a certain workload, run: USD oc logs -l workload=<workload_name> -c <container_name> 5.6.7.2. Increasing Compliance Operator resource limits In some cases, the Compliance Operator might require more memory than the default limits allow. The best way to mitigate this issue is to set custom resource limits. To increase the default memory and CPU limits of scanner pods, see `ScanSetting` Custom resource . 
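Before changing the limits, it can be useful to check what the Operator deployment currently has set. This is a sketch that assumes the default deployment name compliance-operator:

oc -n openshift-compliance get deployment compliance-operator \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'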
Procedure To increase the Operator's memory limits to 500 Mi, create the following patch file named co-memlimit-patch.yaml : spec: config: resources: limits: memory: 500Mi Apply the patch file: USD oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge 5.6.7.3. Configuring Operator resource constraints The resources field defines Resource Constraints for all the containers in the Pod created by the Operator Lifecycle Manager (OLM). Note Resource Constraints applied in this process overwrites the existing resource constraints. Procedure Inject a request of 0.25 cpu and 64 Mi of memory, and a limit of 0.5 cpu and 128 Mi of memory in each container by editing the Subscription object: kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" 5.6.7.4. Configuring ScanSetting resources When using the Compliance Operator in a cluster that contains more than 500 MachineConfigs, the ocp4-pci-dss-api-checks-pod pod may pause in the init phase when performing a Platform scan. Note Resource constraints applied in this process overwrites the existing resource constraints. Procedure Confirm the ocp4-pci-dss-api-checks-pod pod is stuck in the Init:OOMKilled status: USD oc get pod ocp4-pci-dss-api-checks-pod -w Example output NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m Edit the scanLimits attribute in the ScanSetting CR to increase the available memory for the ocp4-pci-dss-api-checks-pod pod: timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1 1 The default setting is 500Mi . Apply the ScanSetting CR to your cluster: USD oc apply -f scansetting.yaml 5.6.7.5. Configuring ScanSetting timeout The ScanSetting object has a timeout option that can be specified in the ComplianceScanSetting object as a duration string, such as 1h30m . If the scan does not finish within the specified timeout, the scan reattempts until the maxRetryOnTimeout limit is reached. Procedure To set a timeout and maxRetryOnTimeout in ScanSetting, modify an existing ScanSetting object: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2 1 The timeout variable is defined as a duration string, such as 1h30m . The default value is 30m . To disable the timeout, set the value to 0s . 
2 The maxRetryOnTimeout variable defines how many times a retry is attempted. The default value is 3 . 5.6.7.6. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 5.6.8. Using the oc-compliance plugin Although the Compliance Operator automates many of the checks and remediations for the cluster, the full process of bringing a cluster into compliance often requires administrator interaction with the Compliance Operator API and other components. The oc-compliance plugin makes the process easier. 5.6.8.1. Installing the oc-compliance plugin Procedure Extract the oc-compliance image to get the oc-compliance binary: USD podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/ Example output W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list. You can now run oc-compliance . 5.6.8.2. Fetching raw results When a compliance scan finishes, the results of the individual checks are listed in the resulting ComplianceCheckResult custom resource (CR). However, an administrator or auditor might require the complete details of the scan. The OpenSCAP tool creates an Advanced Recording Format (ARF) formatted file with the detailed results. This ARF file is too large to store in a config map or other standard Kubernetes resource, so a persistent volume (PV) is created to contain it. Procedure Fetching the results from the PV with the Compliance Operator is a four-step process. However, with the oc-compliance plugin, you can use a single command: USD oc compliance fetch-raw <object-type> <object-name> -o <output-path> <object-type> can be either scansettingbinding , compliancescan or compliancesuite , depending on which of these objects the scans were launched with. <object-name> is the name of the binding, suite, or scan object to gather the ARF file for, and <output-path> is the local directory to place the results. For example: USD oc compliance fetch-raw scansettingbindings my-binding -o /tmp/ Example output Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'....... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'...... 
The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master View the list of files in the directory: USD ls /tmp/ocp4-cis-node-master/ Example output ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2 Extract the results: USD bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml View the results: USD ls resultsdir/worker-scan/ Example output worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2 5.6.8.3. Re-running scans Although it is possible to run scans as scheduled jobs, you must often re-run a scan on demand, particularly after remediations are applied or when other changes to the cluster are made. Procedure Rerunning a scan with the Compliance Operator requires use of an annotation on the scan object. However, with the oc-compliance plugin you can rerun a scan with a single command. Enter the following command to rerun the scans for the ScanSettingBinding object named my-binding : USD oc compliance rerun-now scansettingbindings my-binding Example output Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis' 5.6.8.4. Using ScanSettingBinding custom resources When using the ScanSetting and ScanSettingBinding custom resources (CRs) that the Compliance Operator provides, it is possible to run scans for multiple profiles while using a common set of scan options, such as schedule , machine roles , tolerations , and so on. While that is easier than working with multiple ComplianceSuite or ComplianceScan objects, it can confuse new users. The oc compliance bind subcommand helps you create a ScanSettingBinding CR. Procedure Run: USD oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>] If you omit the -S flag, the default scan setting provided by the Compliance Operator is used. The object type is the Kubernetes object type, which can be profile or tailoredprofile . More than one object can be provided. The object name is the name of the Kubernetes resource, such as .metadata.name . Add the --dry-run option to display the YAML file of the objects that are created. 
For example, given the following profiles and scan settings: USD oc get profile.compliance -n openshift-compliance Example output NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1 USD oc get scansettings -n openshift-compliance Example output NAME AGE default 10m default-auto-apply 10m To apply the default settings to the ocp4-cis and ocp4-cis-node profiles, run: USD oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node Example output Creating ScanSettingBinding my-binding After the ScanSettingBinding CR is created, the bound profile begins scanning for both profiles with the related settings. Overall, this is the fastest way to begin scanning with the Compliance Operator. 5.6.8.5. Printing controls Compliance standards are generally organized into a hierarchy as follows: A benchmark is the top-level definition of a set of controls for a particular standard. For example, FedRAMP Moderate or Center for Internet Security (CIS) v.1.6.0. A control describes a family of requirements that must be met in order to be in compliance with the benchmark. For example, FedRAMP AC-01 (access control policy and procedures). A rule is a single check that is specific for the system being brought into compliance, and one or more of these rules map to a control. The Compliance Operator handles the grouping of rules into a profile for a single benchmark. It can be difficult to determine which controls that the set of rules in a profile satisfy. Procedure The oc compliance controls subcommand provides a report of the standards and controls that a given profile satisfies: USD oc compliance controls profile ocp4-cis-node Example output +-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 1.1.10 | + +----------+ | | 1.1.11 | + +----------+ ... 5.6.8.6. Fetching compliance remediation details The Compliance Operator provides remediation objects that are used to automate the changes required to make the cluster compliant. The fetch-fixes subcommand can help you understand exactly which configuration remediations are used. Use the fetch-fixes subcommand to extract the remediation objects from a profile, rule, or ComplianceRemediation object into a directory to inspect. 
Procedure View the remediations for a profile: USD oc compliance fetch-fixes profile ocp4-cis -o /tmp Example output No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml 1 The No fixes to persist warning is expected whenever there are rules in a profile that do not have a corresponding remediation, because either the rule cannot be remediated automatically or a remediation was not provided. You can view a sample of the YAML file. The head command will show you the first 10 lines: USD head /tmp/ocp4-api-server-audit-log-maxsize.yaml Example output apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100 View the remediation from a ComplianceRemediation object created after a scan: USD oc get complianceremediations -n openshift-compliance Example output NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied USD oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp Example output Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml You can view a sample of the YAML file. The head command will show you the first 10 lines: USD head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml Example output apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc Warning Use caution before applying remediations directly. Some remediations might not be applicable in bulk, such as the usbguard rules in the moderate profile. In these cases, allow the Compliance Operator to apply the rules because it addresses the dependencies and ensures that the cluster remains in a good state. 5.6.8.7. Viewing ComplianceCheckResult object details When scans are finished running, ComplianceCheckResult objects are created for the individual scan rules. The view-result subcommand provides a human-readable output of the ComplianceCheckResult object details. Procedure Run: USD oc compliance view-result ocp4-cis-scheduler-no-bind-address
[ "oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis", "oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis", "oc adm must-gather --image=USD(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name==\"must-gather\")].image}')", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8", "apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: \"2022-10-19T12:06:49Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"43699\" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - 
rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight", "oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events", "apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: \"2022-10-19T12:07:08Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"44819\" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 
severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion.", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1", "apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: \"YYYY-MM-DDTMM:HH:SSZ\" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: \"<version number>\" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile>", "apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule>", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4", "compliance.openshift.io/product-type: Platform/Node", "apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: \"2022-10-18T20:21:00Z\" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: \"38840\" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable 
operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc get compliancesuites", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name of the scan> spec: autoApplyRemediations: false 1 schedule: \"0 1 * * *\" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" nodeSelector: node-role.kubernetes.io/worker: \"\" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT", "oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name of the scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3 rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" 4 nodeSelector: 5 node-role.kubernetes.io/worker: \"\" status: phase: DONE 6 result: NON-COMPLIANT 7", "get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2", "get compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 
420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3", "get complianceremediations -l compliance.openshift.io/suite=workers-compliancesuite", "get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'", "get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" 1", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" env: - name: PLATFORM value: \"HyperShift\"", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 
2 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc -n openshift-compliance get profilebundles rhcos4 -oyaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc delete ssb --all -n openshift-compliance", "oc delete ss --all -n openshift-compliance", "oc delete suite --all -n openshift-compliance", "oc delete scan --all -n openshift-compliance", "oc delete profilebundle.compliance --all -n openshift-compliance", "oc delete sub --all -n openshift-compliance", "oc delete csv --all -n openshift-compliance", "oc delete project openshift-compliance", "project.project.openshift.io \"openshift-compliance\" deleted", "oc get project/openshift-compliance", "Error from server (NotFound): namespaces \"openshift-compliance\" not found", "oc explain scansettings", "oc explain scansettingbindings", "oc describe scansettings default -n openshift-compliance", "Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none>", "Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: 
NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc create -f <file-name>.yaml -n openshift-compliance", "oc get compliancescan -w -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * *", "oc create -f rs-workers.yaml", "oc get scansettings rs-on-workers -n openshift-compliance -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: \"2021-11-19T19:36:36Z\" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: \"48305\" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true", "oc get hostedcluster -A", "NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3", "oc create -n openshift-compliance -f mgmt-tp.yaml", "spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size>", "apiVersion: v1 kind: Pod metadata: name: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: \"64Mi\" cpu: \"250m\" 
limits: 2 memory: \"128Mi\" cpu: \"500m\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster", "oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges", "oc create -n openshift-compliance -f new-profile-node.yaml 1", "tailoredprofile.compliance.openshift.io/nist-moderate-modified created", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc create -n openshift-compliance -f new-scansettingbinding.yaml", "scansettingbinding.compliance.openshift.io/nist-moderate-modified created", "oc get compliancesuites nist-moderate-modified -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'", "{ \"name\": \"ocp4-moderate\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-master\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-worker\", \"namespace\": \"openshift-compliance\" }", "oc get pvc -n openshift-compliance rhcos4-moderate-worker", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m", "oc create -n openshift-compliance -f pod.yaml", "apiVersion: \"v1\" kind: Pod metadata: name: pv-extract spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: [\"sleep\", \"3000\"] volumeMounts: - mountPath: 
\"/workers-scan-results\" name: workers-scan-vol securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker", "oc cp pv-extract:/workers-scan-results -n openshift-compliance .", "oc delete pod pv-extract -n openshift-compliance", "oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/scan=workers-scan", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'", "oc get compliancecheckresults -n openshift-compliance -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'", "NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'", "spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied", "echo \"net.ipv4.conf.all.accept_redirects%3D0\" | python3 -c \"import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))\"", "net.ipv4.conf.all.accept_redirects=0", "oc get nodes -n openshift-compliance", "NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.29.4 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.29.4 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.29.4 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.29.4 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.29.4", "oc -n openshift-compliance label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=", "node/ip-10-0-166-81.us-east-2.compute.internal labeled", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} 
nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: \"\"", "oc get mcp -w", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc get rules -o json | jq '.items[] | select(.checkType == \"Platform\") | select(.metadata.name | contains(\"ocp4-kubelet-\")) | .metadata.name'", "oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=", "oc -n openshift-compliance patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2020-09-10T10:12:54Z\" generation: 2 name: cluster resourceVersion: \"363096\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get complianceremediations -l complianceoperator.openshift.io/outdated-remediation=", "NAME STATE workers-scan-no-empty-passwords Outdated", "oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{\"op\":\"remove\", \"path\":/spec/outdated}]'", "oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords", "NAME STATE workers-scan-no-empty-passwords Applied", "oc -n openshift-compliance patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":false}}' --type=merge", "oc -n openshift-compliance get remediation \\ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: \"2022-01-05T19:52:27Z\" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: \"84820\" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: 
apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied", "oc -n openshift-compliance patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{\"spec\":{\"apply\":false}}' --type=merge", "oc -n openshift-compliance get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master", "NAME AGE compliance-operator-kubelet-master 2m34s", "oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: \"\"", "apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists", "oc -n openshift-compliance create configmap nist-moderate-modified --from-file=tailoring.xml=/path/to/the/tailoringFile.xml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc get mc", "75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=", "allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true 
allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret", "oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml", "securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created", "oc get -n openshift-compliance scc restricted-adjusted-compliance", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]", "oc get events -n openshift-compliance", "oc describe -n openshift-compliance compliancescan/cis-compliance", "oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == \"profilebundlectrl\")'", "date -d @1596184628.955853 --utc", "oc get -n openshift-compliance profilebundle.compliance", "oc get -n openshift-compliance profile.compliance", "oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser", "oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4", "oc logs -n openshift-compliance pods/<pod-name>", "oc describe -n openshift-compliance pod/<pod-name> -c profileparser", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true For each role, a separate scan will be created pointing to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created", "oc get cronjobs", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m", "oc -n openshift-compliance get cm -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=", "oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels", "NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner", "oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod", "Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker 
complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version=\"1.0\" encoding=\"UTF-8\"?>", "oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium", "oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc get mc | grep 75-", "75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s", "oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements", "Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod", "NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium", "oc logs -l workload=<workload_name> -c <container_name>", "spec: config: resources: limits: memory: 500Mi", "oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge", "kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"", "oc get pod ocp4-pci-dss-api-checks-pod -w", "NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m", "timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1", "oc apply -f scansetting.yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: 
rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2", "podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/", "W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.", "oc compliance fetch-raw <object-type> <object-name> -o <output-path>", "oc compliance fetch-raw scansettingbindings my-binding -o /tmp/", "Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'.... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........ The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master", "ls /tmp/ocp4-cis-node-master/", "ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2", "bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml", "ls resultsdir/worker-scan/", "worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2", "oc compliance rerun-now scansettingbindings my-binding", "Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis'", "oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get scansettings -n openshift-compliance", "NAME AGE default 10m default-auto-apply 10m", "oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node", "Creating ScanSettingBinding my-binding", "oc compliance controls profile ocp4-cis-node", "+-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 
1.1.10 | + +----------+ | | 1.1.11 | + +----------+", "oc compliance fetch-fixes profile ocp4-cis -o /tmp", "No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml", "head /tmp/ocp4-api-server-audit-log-maxsize.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100", "oc get complianceremediations -n openshift-compliance", "NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied", "oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp", "Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc", "oc compliance view-result ocp4-cis-scheduler-no-bind-address" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/compliance-operator
Chapter 1. Introduction
Chapter 1. Introduction 1.1. About This Guide The Red Hat Enterprise Linux Virtualization Tuning and Optimization Guide contains details of configurable options and settings, and other suggestions that will help you achieve optimal performance of your Red Hat Enterprise Linux hosts and guest virtual machines. Following this introduction, the guide consists of the following sections: Virt-manager tuned Networking Memory Block I/O NUMA Performance Monitoring Tools
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-introduction
28.4.2. Standard ABRT Installation Supported Events
28.4.2. Standard ABRT Installation Supported Events A standard ABRT installation currently provides a number of default analyzing, collecting and reporting events. Some of these events are also configurable using the ABRT GUI application (for more information on event configuration using ABRT GUI, see Section 28.4.3, "Event Configuration in ABRT GUI"). ABRT GUI only shows the event's unique part of the name, which is more readable to the user, instead of the complete event name. For example, the analyze_xsession_errors event is shown as Collect .xsession-errors in ABRT GUI. The following is a list of default analyzing, collecting and reporting events provided by the standard installation of ABRT: analyze_VMcore - Analyze VM core Runs GDB (the GNU debugger) on problem data of an application and generates a backtrace of the kernel. It is defined in the /etc/libreport/events.d/vmcore_event.conf configuration file. analyze_LocalGDB - Local GNU Debugger Runs GDB (the GNU debugger) on problem data of an application and generates a backtrace of a program. It is defined in the /etc/libreport/events.d/ccpp_event.conf configuration file. analyze_xsession_errors - Collect .xsession-errors Saves relevant lines from the ~/.xsession-errors file to the problem report. It is defined in the /etc/libreport/events.d/ccpp_event.conf configuration file. report_Logger - Logger Creates a problem report and saves it to a specified local file. It is defined in the /etc/libreport/events.d/print_event.conf configuration file. report_RHTSupport - Red Hat Customer Support Reports problems to the Red Hat Technical Support system. This possibility is intended for users of Red Hat Enterprise Linux. It is defined in the /etc/libreport/events.d/rhtsupport_event.conf configuration file. report_Mailx - Mailx Sends a problem report via the Mailx utility to a specified email address. It is defined in the /etc/libreport/events.d/mailx_event.conf configuration file. report_Kerneloops - Kerneloops.org Sends a kernel problem to the oops tracker. It is defined in the /etc/libreport/events.d/koops_event.conf configuration file. report_Uploader - Report uploader Uploads a tarball (.tar.gz) archive with problem data to the chosen destination using the FTP or the SCP protocol. It is defined in the /etc/libreport/events.d/uploader_event.conf configuration file.
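Because each of the events listed above is defined in a plain-text configuration file under /etc/libreport/events.d/, you can inspect the definitions directly on a standard installation. The following commands are a minimal sketch; the directory layout is taken from the list above, but the exact file names and contents can vary between releases.
# List the event definition files shipped by the installed ABRT and libreport packages
ls /etc/libreport/events.d/
# Review how a particular reporting event, for example report_Mailx, is defined
cat /etc/libreport/events.d/mailx_event.conf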
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-abrt-configuration-events-default_events
Preface
Preface Providing feedback on Red Hat documentation Red Hat appreciates your feedback on product documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to help the documentation team to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue. In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in the other fields at their default values. In the Reporter field, enter your Jira user name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/connectivity_link_observability_guide/pr01
Chapter 3. Reviewing the reports
Chapter 3. Reviewing the reports Use a browser to open the index.html file located in the report output directory. This opens a landing page that lists the applications that were processed. Each row contains a high-level overview of the story points, number of incidents, and technologies encountered in that application. Figure 3.1. Application list Note The incidents and estimated story points change as new rules are added to MTA. The values here may not match what you see when you test this application. The following table lists all of the reports and pages that can be accessed from this main MTA landing page. Click the name of the application, jee-example-app-1.0.0.ear, to view the application report. Page How to Access Application Click the name of the application. Technologies report Click the Technologies link at the top of the page. Archives shared by multiple applications Click the Archives shared by multiple applications link. Note that this link is only available when there are shared archives across multiple applications. Rule providers execution overview Click the Rule providers execution overview link at the bottom of the page. Note that if an application shares archives with other analyzed applications, you will see a breakdown of how many story points are from shared archives and how many are unique to this application. Figure 3.2. Shared archives Information about the archives that are shared among applications can be found in the Archives Shared by Multiple Applications reports. 3.1. Application report 3.1.1. Dashboard Access this report from the report landing page by clicking on the application name in the Application List. The dashboard gives an overview of the entire application migration effort. It summarizes: The incidents and story points by category The incidents and story points by level of effort of the suggested changes The incidents by package Figure 3.3. Dashboard The top navigation bar lists the various reports that contain additional details about the migration of this application. Note that only those reports that are applicable to the current application will be available. Report Description Issues Provides a concise summary of all issues that require attention. Insights Provides information about the technologies used in the application and their usage in the code. However, these Insights do not impact the migration. Application details Provides a detailed overview of all resources found within the application that may need attention during the migration. Technologies Displays all embedded libraries grouped by functionality, allowing you to quickly view the technologies used in each application. Dependencies Displays all Java-packaged dependencies found within the application. Unparsable Shows all files that MTA could not parse in the expected format. For instance, a file with a .xml or .wsdl suffix is assumed to be an XML file. If the XML parser fails, the issue is reported here and also where the individual file is listed. Remote services Displays all remote services references that were found within the application. EJBs Contains a list of EJBs found within the application. JBPM Contains all of the JBPM-related resources that were discovered during analysis. JPA Contains details on all JPA-related resources that were found in the application. Hibernate Contains details on all Hibernate-related resources that were found in the application. Server resources Displays all server resources (for example, JNDI resources) in the input application. 
Spring Beans Contains a list of Spring Beans found during the analysis. Hard-coded IP addresses Provides a list of all hard-coded IP addresses that were found in the application. Ignored files Lists the files found in the application that, based on certain rules and MTA configuration, were not processed. See the --userIgnorePath option for more information. About Describes the current version of MTA and provides helpful links for further assistance. 3.1.2. Issues report Access this report from the dashboard by clicking the Issues link. This report includes details about every issue that was raised by the selected migration paths. The following information is provided for each issue encountered: A title to summarize the issue. The total number of incidents, or times the issue was encountered. The rule story points to resolve a single instance of the issue. The estimated level of effort to resolve the issue. The total story points to resolve every instance encountered. This is calculated by multiplying the number of incidents found by the story points per incident; for example, 10 incidents of an issue worth 3 story points each contribute 30 total story points. Figure 3.4. Issues report Each reported issue may be expanded, by clicking on the title, to obtain additional details. The following information is provided. A list of files where the incidents occurred, along with the number of incidents within each file. If the file is a Java source file, then clicking the filename will direct you to the corresponding Source report. A detailed description of the issue. This description outlines the problem, provides any known solutions, and references supporting documentation regarding either the issue or resolution. A direct link, entitled Show Rule, to the rule that generated the issue. Figure 3.5. Expanded issue Issues are sorted into four categories by default. Information on these categories is available at ask Category. 3.1.3. Insights Important Insights is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. Previously, a violation generated by a rule with zero effort was listed as an issue in the static report. This is now listed as an insight instead. Issues are generated by general rules, whereas string tags are generated by tagging rules. String tags indicate the presence of a technology but do not show the code location. With the introduction of Insights, you can see the technology used in the application along with its usage in the code. For example, a rule that searches for deprecated API usage in the code does not impact the current migration, but the usage can be tracked and fixed when needed in the future. Unlike issues, insights do not need to be fixed for a successful migration. They are generated by any rule that doesn't have a positive effort value and category assigned. They might have a message and tag. Note Insights are generated automatically if applicable or present. Currently, MTA supports generating Insights when application analysis is done using CLI; a minimal example invocation is shown at the end of this chapter. You can view Insights under the Insights tab in the static report. 
Example: Insights generated by a tagging rule with undefined effort - customVariables: [] description: Embedded library - Apache Wicket labels: - konveyor.io/include=always links: [] ruleID: mvc-01000 tag: - Apache Wicket - Embedded library - Apache Wicket when: builtin.file: pattern: .*wicket.*\.jar Example: Insights generated by a non-tagging rule with zero effort - category: potential customVariables: [] description: RESTful Web Services @Context annotation has been deprecated effort: 0 message: Future versions of this API will no longer support `@Context` and related types such as `ContextResolver`. ruleID: jakarta-ws-rs-00001 when: java.referenced: location: ANNOTATION pattern: jakarta.ws.rs.core.Context 3.1.4. Application details report Access this report from the dashboard by clicking the Application Details link. The report lists the story points, the Java incidents by package, and a count of the occurrences of the technologies found in the application. Next is a display of application messages generated during the migration process. Finally, there is a breakdown of this information for each archive analyzed during the process. Figure 3.6. Application Details report Expand the jee-example-app-1.0.0.ear/jee-example-services.jar to review the story points, Java incidents by package, and a count of the occurrences of the technologies found in this archive. This summary begins with a total of the story points assigned to its migration, followed by a table detailing the changes required for each file in the archive. The report contains the following columns. Column Name Description Name The name of the file being analyzed. Technology The type of file being analyzed, for example, Decompiled Java File or Properties. Issues Warnings about areas of code that need review or changes. Story Points Level of effort required to migrate the file. Note that if an archive is duplicated several times in an application, it will be listed just once in the report and will be tagged with [Included multiple times]. Figure 3.7. Duplicate archive in an application The story points for archives that are duplicated within an application will be counted only once in the total story point count for that application. 3.1.5. Technologies report Access this report from the dashboard by clicking the Technologies link. The report lists the occurrences of technologies, grouped by function, in the analyzed application. It is an overview of the technologies found in the application, and is designed to assist users in quickly understanding each application's purpose. The image below shows the technologies used in the jee-example-app. Figure 3.8. Technologies in an application 3.1.6. Source report The Source report displays the migration issues in the context of the source file in which they were discovered. Figure 3.9. Source report 3.2. Technologies report Access this report from the report landing page by clicking the Technologies link. This report provides an aggregate listing of the technologies used, grouped by function, for the analyzed applications. It shows how the technologies are distributed, and is typically reviewed after analyzing a large number of applications to group the applications and identify patterns. It also shows the size, number of libraries, and story point totals of each application. Clicking any of the headers, such as Markup, sorts the results in descending order. Selecting the same header again re-sorts the results in ascending order. 
The currently selected header is identified in bold, next to a directional arrow that indicates the direction of the sort. Figure 3.10. Technologies used across multiple applications
[ "- customVariables: [] description: Embedded library - Apache Wicket labels: - konveyor.io/include=always links: [] ruleID: mvc-01000 tag: - Apache Wicket - Embedded library - Apache Wicket when: builtin.file: pattern: .*wicket.*\\.jar", "- category: potential customVariables: [] description: RESTful Web Services @Context annotation has been deprecated effort: 0 message: Future versions of this API will no longer support `@Context` and related types such as `ContextResolver`. ruleID: jakarta-ws-rs-00001 when: java.referenced: location: ANNOTATION pattern: jakarta.ws.rs.core.Context" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/cli_guide/review-reports_cli-guide
Chapter 3. Signing a kernel and modules for Secure Boot
Chapter 3. Signing a kernel and modules for Secure Boot You can enhance the security of your system by using a signed kernel and signed kernel modules. On UEFI-based build systems where Secure Boot is enabled, you can self-sign a privately built kernel or kernel modules. Furthermore, you can import your public key into a target system where you want to deploy your kernel or kernel modules. If Secure Boot is enabled, all of the following components have to be signed with a private key and authenticated with the corresponding public key: UEFI operating system boot loader The Red Hat Enterprise Linux kernel All kernel modules If any of these components are not signed and authenticated, the system cannot finish the booting process. RHEL 8 includes: Signed boot loaders Signed kernels Signed kernel modules In addition, the signed first-stage boot loader and the signed kernel include embedded Red Hat public keys. These signed executable binaries and embedded keys enable RHEL 8 to install, boot, and run with the Microsoft UEFI Secure Boot Certification Authority keys. These keys are provided by the UEFI firmware on systems that support UEFI Secure Boot. Note Not all UEFI-based systems include support for Secure Boot. The build system, where you build and sign your kernel module, does not need to have UEFI Secure Boot enabled and does not even need to be a UEFI-based system. 3.1. Prerequisites To be able to sign externally built kernel modules, install the utilities from the following packages: Table 3.1. Required utilities Utility Provided by package Used on Purpose efikeygen pesign Build system Generates public and private X.509 key pair openssl openssl Build system Exports the unencrypted private key sign-file kernel-devel Build system Executable file used to sign a kernel module with the private key mokutil mokutil Target system Optional utility used to manually enroll the public key keyctl keyutils Target system Optional utility used to display public keys in the system keyring 3.2. What is UEFI Secure Boot With the Unified Extensible Firmware Interface (UEFI) Secure Boot technology, you can prevent the execution of kernel-space code that is not signed by a trusted key. The system boot loader is signed with a cryptographic key. The database of public keys in the firmware authorizes the signing key. You can subsequently verify the signature in the next-stage boot loader and the kernel. UEFI Secure Boot establishes a chain of trust from the firmware to the signed drivers and kernel modules as follows: A UEFI private key signs, and a public key authenticates, the shim first-stage boot loader. A certificate authority (CA) in turn signs the public key. The CA is stored in the firmware database. The shim file contains the Red Hat public key Red Hat Secure Boot (CA key 1) to authenticate the GRUB boot loader and the kernel. The kernel in turn contains public keys to authenticate drivers and modules. Secure Boot is the boot path validation component of the UEFI specification. The specification defines: Programming interface for cryptographically protected UEFI variables in non-volatile storage. Storing the trusted X.509 root certificates in UEFI variables. Validation of UEFI applications such as boot loaders and drivers. Procedures to revoke known-bad certificates and application hashes. UEFI Secure Boot helps in the detection of unauthorized changes but does not: Prevent installation or removal of second-stage boot loaders. Require explicit user confirmation of such changes.
Stop boot path manipulations. Signatures are verified during booting but, not when the boot loader is installed or updated. If the boot loader or the kernel are not signed by a system trusted key, Secure Boot prevents them from starting. 3.3. UEFI Secure Boot support You can install and run RHEL 8 on systems with enabled UEFI Secure Boot if the kernel and all the loaded drivers are signed with a trusted key. Red Hat provides kernels and drivers that are signed and authenticated by the relevant Red Hat keys. If you want to load externally built kernels or drivers, you must sign them as well. Restrictions imposed by UEFI Secure Boot The system only runs the kernel-mode code after its signature has been properly authenticated. GRUB module loading is disabled because there is no infrastructure for signing and verification of GRUB modules. Allowing module loading would run untrusted code within the security perimeter defined by Secure Boot. Red Hat provides a signed GRUB binary that has all supported modules on RHEL 8. Additional resources Restrictions Imposed by UEFI Secure Boot 3.4. Requirements for authenticating kernel modules with X.509 keys In RHEL 8, when a kernel module is loaded, the kernel checks the signature of the module against the public X.509 keys from the kernel system keyring ( .builtin_trusted_keys ) and the kernel platform keyring ( .platform ). The .platform keyring provides keys from third-party platform providers and custom public keys. The keys from the kernel system .blacklist keyring are excluded from verification. You need to meet certain conditions to load kernel modules on systems with enabled UEFI Secure Boot functionality: If UEFI Secure Boot is enabled or if the module.sig_enforce kernel parameter has been specified: You can only load those signed kernel modules whose signatures were authenticated against keys from the system keyring ( .builtin_trusted_keys ) and the platform keyring ( .platform ). The public key must not be on the system revoked keys keyring ( .blacklist ). If UEFI Secure Boot is disabled and the module.sig_enforce kernel parameter has not been specified: You can load unsigned kernel modules and signed kernel modules without a public key. If the system is not UEFI-based or if UEFI Secure Boot is disabled: Only the keys embedded in the kernel are loaded onto .builtin_trusted_keys and .platform . You have no ability to augment that set of keys without rebuilding the kernel. Table 3.2. Kernel module authentication requirements for loading Module signed Public key found and signature valid UEFI Secure Boot state sig_enforce Module load Kernel tainted Unsigned - Not enabled Not enabled Succeeds Yes Not enabled Enabled Fails - Enabled - Fails - Signed No Not enabled Not enabled Succeeds Yes Not enabled Enabled Fails - Enabled - Fails - Signed Yes Not enabled Not enabled Succeeds No Not enabled Enabled Succeeds No Enabled - Succeeds No 3.5. Sources for public keys During boot, the kernel loads X.509 keys from a set of persistent key stores into the following keyrings: The system keyring ( .builtin_trusted_keys ) The .platform keyring The system .blacklist keyring Table 3.3. 
Sources for system keyrings Source of X.509 keys User can add keys UEFI Secure Boot state Keys loaded during boot Embedded in kernel No - .builtin_trusted_keys UEFI db Limited Not enabled No Enabled .platform Embedded in the shim boot loader No Not enabled No Enabled .platform Machine Owner Key (MOK) list Yes Not enabled No Enabled .platform .builtin_trusted_keys A keyring that is built on boot. Provides trusted public keys. root privileges are required to view the keys. .platform A keyring that is built on boot. Provides keys from third-party platform providers and custom public keys. root privileges are required to view the keys. .blacklist A keyring with X.509 keys which have been revoked. A module signed by a key from .blacklist will fail authentication even if your public key is in .builtin_trusted_keys . UEFI Secure Boot db A signature database. Stores keys (hashes) of UEFI applications, UEFI drivers, and boot loaders. The keys can be loaded on the machine. UEFI Secure Boot dbx A revoked signature database. Prevents keys from getting loaded. The revoked keys from this database are added to the .blacklist keyring. 3.6. Generating a public and private key pair To use a custom kernel or custom kernel modules on a Secure Boot-enabled system, you must generate a public and private X.509 key pair. You can use the generated private key to sign the kernel or the kernel modules. You can also validate the signed kernel or kernel modules by adding the corresponding public key to the Machine Owner Key (MOK) for Secure Boot. Warning Apply strong security measures and access policies to guard the contents of your private key. In the wrong hands, the key could be used to compromise any system which is authenticated by the corresponding public key. Procedure Create an X.509 public and private key pair: If you only want to sign custom kernel modules : If you want to sign custom kernel : When the RHEL system is running FIPS mode: Note In FIPS mode, you must use the --token option so that efikeygen finds the default "NSS Certificate DB" token in the PKI database. The public and private keys are now stored in the /etc/pki/pesign/ directory. Important It is a good security practice to sign the kernel and the kernel modules within the validity period of its signing key. However, the sign-file utility does not warn you and the key will be usable in RHEL 8 regardless of the validity dates. Additional resources openssl(1) manual page RHEL Security Guide Enrolling public key on target system by adding the public key to the MOK list 3.7. Example output of system keyrings You can display information about the keys on the system keyrings using the keyctl utility from the keyutils package. Prerequisites You have root permissions. You have installed the keyctl utility from the keyutils package. Example 3.1. Keyrings output The following is a shortened example output of .builtin_trusted_keys , .platform , and .blacklist keyrings from a RHEL 8 system where UEFI Secure Boot is enabled. The .builtin_trusted_keys keyring in the example shows the addition of two keys from the UEFI Secure Boot db keys as well as the Red Hat Secure Boot (CA key 1) , which is embedded in the shim boot loader. Example 3.2. Kernel console output The following example shows the kernel console output. The messages identify the keys with an UEFI Secure Boot related source. These include UEFI Secure Boot db , embedded shim , and MOK list. Additional resources keyctl(1) , dmesg(1) manual pages 3.8. 
Enrolling public key on target system by adding the public key to the MOK list You must authenticate your public key on a system for kernel or kernel module access and enroll it in the platform keyring ( .platform ) of the target system. When RHEL 8 boots on a UEFI-based system with Secure Boot enabled, the kernel imports public keys from the db key database and excludes revoked keys from the dbx database. The Machine Owner Key (MOK) facility allows expanding the UEFI Secure Boot key database. When booting RHEL 8 on UEFI-enabled systems with Secure Boot enabled, keys on the MOK list are added to the platform keyring ( .platform ), along with the keys from the Secure Boot database. The list of MOK keys is stored securely and persistently in the same way, but it is a separate facility from the Secure Boot databases. The MOK facility is supported by shim , MokManager , GRUB , and the mokutil utility that enables secure key management and authentication for UEFI-based systems. Note To get the authentication service of your kernel module on your systems, consider requesting your system vendor to incorporate your public key into the UEFI Secure Boot key database in their factory firmware image. Prerequisites You have generated a public and private key pair and know the validity dates of your public keys. For details, see Generating a public and private key pair . Procedure Export your public key to the sb_cert.cer file: Import your public key into the MOK list: Enter a new password for this MOK enrollment request. Reboot the machine. The shim boot loader notices the pending MOK key enrollment request and it launches MokManager.efi to enable you to complete the enrollment from the UEFI console. Choose Enroll MOK , enter the password you previously associated with this request when prompted, and confirm the enrollment. Your public key is added to the MOK list, which is persistent. Once a key is on the MOK list, it will be automatically propagated to the .platform keyring on this and subsequent boots when UEFI Secure Boot is enabled. 3.9. Signing a kernel with the private key You can obtain enhanced security benefits on your system by loading a signed kernel if the UEFI Secure Boot mechanism is enabled. Prerequisites You have generated a public and private key pair and know the validity dates of your public keys. For details, see Generating a public and private key pair . You have enrolled your public key on the target system. For details, see Enrolling public key on target system by adding the public key to the MOK list . You have a kernel image in the ELF format available for signing. Procedure On the x64 architecture: Create a signed image: Replace version with the version suffix of your vmlinuz file, and Custom Secure Boot key with the name that you chose earlier. Optional: Check the signatures: Overwrite the unsigned image with the signed image: On the 64-bit ARM architecture: Decompress the vmlinuz file: Create a signed image: Optional: Check the signatures: Compress the vmlinux file: Remove the uncompressed vmlinux file: 3.10. Signing a GRUB build with the private key On a system where the UEFI Secure Boot mechanism is enabled, you can sign a GRUB build with a custom existing private key. You must do this if you are using a custom GRUB build, or if you have removed the Microsoft trust anchor from your system. Prerequisites You have generated a public and private key pair and know the validity dates of your public keys. For details, see Generating a public and private key pair . 
You have enrolled your public key on the target system. For details, see Enrolling public key on target system by adding the public key to the MOK list . You have a GRUB EFI binary available for signing. Procedure On the x64 architecture: Create a signed GRUB EFI binary: Replace Custom Secure Boot key with the name that you chose earlier. Optional: Check the signatures: Overwrite the unsigned binary with the signed binary: On the 64-bit ARM architecture: Create a signed GRUB EFI binary: Replace Custom Secure Boot key with the name that you chose earlier. Optional: Check the signatures: Overwrite the unsigned binary with the signed binary: 3.11. Signing kernel modules with the private key You can enhance the security of your system by loading signed kernel modules if the UEFI Secure Boot mechanism is enabled. Your signed kernel module is also loadable on systems where UEFI Secure Boot is disabled or on a non-UEFI system. As a result, you do not need to provide both, a signed and unsigned version of your kernel module. Prerequisites You have generated a public and private key pair and know the validity dates of your public keys. For details, see Generating a public and private key pair . You have enrolled your public key on the target system. For details, see Enrolling public key on target system by adding the public key to the MOK list . You have a kernel module in ELF image format available for signing. Procedure Export your public key to the sb_cert.cer file: Extract the key from the NSS database as a PKCS #12 file: When the command prompts, enter a new password that encrypts the private key. Export the unencrypted private key: Important Keep the unencrypted private key secure. Sign your kernel module. The following command appends the signature directly to the ELF image in your kernel module file: Your kernel module is now ready for loading. Important In RHEL 8, the validity dates of the key pair matter. The key does not expire, but the kernel module must be signed within the validity period of its signing key. The sign-file utility will not warn you of this. For example, a key that is only valid in 2019 can be used to authenticate a kernel module signed in 2019 with that key. However, users cannot use that key to sign a kernel module in 2020. Verification Display information about the kernel module's signature: Check that the signature lists your name as entered during generation. Note The appended signature is not contained in an ELF image section and is not a formal part of the ELF image. Therefore, utilities such as readelf cannot display the signature on your kernel module. Load the module: Remove (unload) the module: Additional resources Displaying information about kernel modules 3.12. Loading signed kernel modules After enrolling your public key in the system keyring ( .builtin_trusted_keys ) and the MOK list, and signing kernel modules with your private key, you can load them using the modprobe command. Prerequisites You have generated the public and private key pair. For details, see Generating a public and private key pair . You have enrolled the public key into the system keyring. For details, see Enrolling public key on target system by adding the public key to the MOK list . You have signed a kernel module with the private key. For details, see Signing kernel modules with the private key . 
Install the kernel-modules-extra package, which creates the /lib/modules/$(uname -r)/extra/ directory: Procedure Verify that your public keys are on the system keyring: Copy the kernel module into the extra/ directory of the kernel that you want: Update the modular dependency list: Load the kernel module: Optional: To load the module on boot, add it to the /etc/modules-load.d/ my_module .conf file: Verification Verify that the module was successfully loaded: Additional resources Managing kernel modules
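The commands below string the signing and loading steps from this chapter together into one minimal sketch. It reuses the commands shown above; my_module and the 'Custom Secure Boot key' nickname are placeholders, and the sketch assumes the key pair has already been generated and the public key enrolled in the MOK list.
# Build host: extract the signing key and sign the module.
certutil -d /etc/pki/pesign -n 'Custom Secure Boot key' -Lr > sb_cert.cer
pk12util -o sb_cert.p12 -n 'Custom Secure Boot key' -d /etc/pki/pesign
openssl pkcs12 -in sb_cert.p12 -out sb_cert.priv -nocerts -nodes
/usr/src/kernels/$(uname -r)/scripts/sign-file sha256 sb_cert.priv sb_cert.cer my_module.ko
# Target host: confirm the enrolled key is present, then install and load the module.
keyctl list %:.platform
cp my_module.ko /lib/modules/$(uname -r)/extra/
depmod -a
modprobe -v my_module
lsmod | grep my_module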
[ "yum install pesign openssl kernel-devel mokutil keyutils", "efikeygen --dbdir /etc/pki/pesign --self-sign --module --common-name 'CN= Organization signing key ' --nickname ' Custom Secure Boot key '", "efikeygen --dbdir /etc/pki/pesign --self-sign --kernel --common-name 'CN= Organization signing key ' --nickname ' Custom Secure Boot key '", "efikeygen --dbdir /etc/pki/pesign --self-sign --kernel --common-name 'CN= Organization signing key ' --nickname ' Custom Secure Boot key ' --token 'NSS FIPS 140-2 Certificate DB'", "keyctl list %:.builtin_trusted_keys 6 keys in keyring: ...asymmetric: Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87 ...asymmetric: Red Hat Secure Boot (CA key 1): 4016841644ce3a810408050766e8f8a29 ...asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed ...asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e ...asymmetric: Red Hat Enterprise Linux kernel signing key: 4249689eefc77e95880b ...asymmetric: Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b7 keyctl list %:.platform 4 keys in keyring: ...asymmetric: VMware, Inc.: 4ad8da0472073 ...asymmetric: Red Hat Secure Boot CA 5: cc6fafe72 ...asymmetric: Microsoft Windows Production PCA 2011: a929f298e1 ...asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4e0bd82 keyctl list %:.blacklist 4 keys in keyring: ...blacklist: bin:f5ff83a ...blacklist: bin:0dfdbec ...blacklist: bin:38f1d22 ...blacklist: bin:51f831f", "dmesg | egrep 'integrity.*cert' [1.512966] integrity: Loading X.509 certificate: UEFI:db [1.513027] integrity: Loaded X.509 cert 'Microsoft Windows Production PCA 2011: a929023 [1.513028] integrity: Loading X.509 certificate: UEFI:db [1.513057] integrity: Loaded X.509 cert 'Microsoft Corporation UEFI CA 2011: 13adbf4309 [1.513298] integrity: Loading X.509 certificate: UEFI:MokListRT (MOKvar table) [1.513549] integrity: Loaded X.509 cert 'Red Hat Secure Boot CA 5: cc6fa5e72868ba494e93", "certutil -d /etc/pki/pesign -n ' Custom Secure Boot key ' -Lr > sb_cert.cer", "mokutil --import sb_cert.cer", "pesign --certificate ' Custom Secure Boot key ' --in vmlinuz- version --sign --out vmlinuz- version .signed", "pesign --show-signature --in vmlinuz- version .signed", "mv vmlinuz- version .signed vmlinuz- version", "zcat vmlinuz- version > vmlinux- version", "pesign --certificate ' Custom Secure Boot key ' --in vmlinux- version --sign --out vmlinux- version .signed", "pesign --show-signature --in vmlinux- version .signed", "gzip --to-stdout vmlinux- version .signed > vmlinuz- version", "rm vmlinux- version *", "pesign --in /boot/efi/EFI/redhat/grubx64.efi --out /boot/efi/EFI/redhat/grubx64.efi.signed --certificate ' Custom Secure Boot key ' --sign", "pesign --in /boot/efi/EFI/redhat/grubx64.efi.signed --show-signature", "mv /boot/efi/EFI/redhat/grubx64.efi.signed /boot/efi/EFI/redhat/grubx64.efi", "pesign --in /boot/efi/EFI/redhat/grubaa64.efi --out /boot/efi/EFI/redhat/grubaa64.efi.signed --certificate ' Custom Secure Boot key ' --sign", "pesign --in /boot/efi/EFI/redhat/grubaa64.efi.signed --show-signature", "mv /boot/efi/EFI/redhat/grubaa64.efi.signed /boot/efi/EFI/redhat/grubaa64.efi", "certutil -d /etc/pki/pesign -n ' Custom Secure Boot key ' -Lr > sb_cert.cer", "pk12util -o sb_cert.p12 -n ' Custom Secure Boot key ' -d /etc/pki/pesign", "openssl pkcs12 -in sb_cert.p12 -out sb_cert.priv -nocerts -nodes", "/usr/src/kernels/USD(uname -r)/scripts/sign-file sha256 sb_cert.priv sb_cert.cer my_module .ko", "modinfo my_module .ko | grep 
signer signer: Your Name Key", "insmod my_module .ko", "modprobe -r my_module .ko", "yum -y install kernel-modules-extra", "keyctl list %:.platform", "cp my_module .ko /lib/modules/USD(uname -r)/extra/", "depmod -a", "modprobe -v my_module", "echo \" my_module \" > /etc/modules-load.d/ my_module .conf", "lsmod | grep my_module" ]
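For the kernel image itself, a minimal x86_64 sketch of the pesign workflow above might look as follows. The VER value and the /boot/ location are assumptions used for illustration; substitute the version suffix of your own vmlinuz file and your key nickname.
# Example values only; replace VER with the version suffix of your vmlinuz file.
VER=4.18.0-477.el8.x86_64
pesign --certificate 'Custom Secure Boot key' --in /boot/vmlinuz-$VER --sign --out /boot/vmlinuz-$VER.signed
pesign --show-signature --in /boot/vmlinuz-$VER.signed
mv /boot/vmlinuz-$VER.signed /boot/vmlinuz-$VER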
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/signing-a-kernel-and-modules-for-secure-boot_managing-monitoring-and-updating-the-kernel
Chapter 3. Managing user-owned OAuth access tokens
Chapter 3. Managing user-owned OAuth access tokens Users can review their own OAuth access tokens and delete any that are no longer needed. 3.1. Listing user-owned OAuth access tokens You can list your user-owned OAuth access tokens. Token names are not sensitive and cannot be used to log in. Procedure List all user-owned OAuth access tokens: USD oc get useroauthaccesstokens Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full List user-owned OAuth access tokens for a particular OAuth client: USD oc get useroauthaccesstokens --field-selector=clientName="console" Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full 3.2. Viewing the details of a user-owned OAuth access token You can view the details of a user-owned OAuth access token. Procedure Describe the details of a user-owned OAuth access token: USD oc describe useroauthaccesstokens <token_name> Example output Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none> 1 The token name, which is the sha256 hash of the token. Token names are not sensitive and cannot be used to log in. 2 The client name, which describes where the token originated from. 3 The value in seconds from the creation time before this token expires. 4 If there is a token inactivity timeout set for the OAuth server, this is the value in seconds from the creation time before this token can no longer be used. 5 The scopes for this token. 6 The user name associated with this token. 3.3. Deleting user-owned OAuth access tokens The oc logout command only invalidates the OAuth token for the active session. You can use the following procedure to delete any user-owned OAuth tokens that are no longer needed. Deleting an OAuth access token logs out the user from all sessions that use the token. Procedure Delete the user-owned OAuth access token: USD oc delete useroauthaccesstokens <token_name> Example output useroauthaccesstoken.oauth.openshift.io "<token_name>" deleted 3.4. 
Adding unauthenticated groups to cluster roles As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Dedicated by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary. You can add unauthenticated users to the following cluster roles: system:scope-impersonation system:webhook system:oauth-token-deleter self-access-reviewer Important Always verify compliance with your organization's security standards when modifying unauthenticated access. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated Apply the configuration by running the following command: USD oc apply -f add-<cluster_role>.yaml
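A quick way to confirm the result is to inspect the binding after it is applied. This is a minimal sketch using standard oc commands; the binding name follows the <cluster_role>access-unauthenticated pattern from the YAML above.
# Confirm the binding exists and that system:unauthenticated is listed under subjects.
oc get clusterrolebinding <cluster_role>access-unauthenticated -o yaml
oc describe clusterrolebinding <cluster_role>access-unauthenticated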
[ "oc get useroauthaccesstokens", "NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full", "oc get useroauthaccesstokens --field-selector=clientName=\"console\"", "NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full", "oc describe useroauthaccesstokens <token_name>", "Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>", "oc delete useroauthaccesstokens <token_name>", "useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated", "oc apply -f add-<cluster_role>.yaml" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/authentication_and_authorization/managing-oauth-access-tokens
Chapter 8. Reference materials
Chapter 8. Reference materials To learn more about the compliance service, see the following resources: Assessing and Monitoring Security Policy Compliance of RHEL Systems Red Hat Insights for Red Hat Enterprise Linux Documentation Red Hat Insights for Red Hat Enterprise Linux Product Support page
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports/assembly-compl-reference-materials
Chapter 6. Using director to configure security hardening
Chapter 6. Using director to configure security hardening This chapter describes how director can apply security hardening values as part of the deployment process. Note When running openstack overcloud deploy , remember that you will always need to include all necessary environment files needed to deploy the overcloud, in addition to any changes you want to make. 6.1. Use SSH banner text You can set a banner that displays a console message to all users that connect over SSH. You can add banner text to /etc/issue using the following parameters in an environment file. Consider customizing this sample text to suit your requirements. To apply this change to your deployment, save the settings as a file called ssh_banner.yaml , and then pass it to the overcloud deploy command as follows. The <full environment> indicates that you must still include all of your original deployment parameters. For example: 6.2. Audit for system events Maintaining a record of all audit events helps you establish a system baseline, perform troubleshooting, or analyze the sequence of events that led to a certain outcome. The audit system is capable of logging many types of events, such as changes to the system time, changes to Mandatory/Discretionary Access Control, and creating/deleting users or groups. Rules can be created using an environment file, which are then injected by director into /etc/audit/audit.rules . For example: 6.3. Manage firewall rules Firewall rules are automatically applied on overcloud nodes during deployment, and are intended to only expose the ports required to get OpenStack working. You can specify additional firewall rules as needed. For example, to add rules for a Zabbix monitoring system: You can also add rules that restrict access. The number used during rule definition will determine the rule's precedence. For example, RabbitMQ's rule number is 109 by default. If you want to restrain it, you switch it to use a lower value: In this example, 098 and 099 are arbitrarily chosen numbers that are lower than RabbitMQ's rule number 109 . To determine a rule's number, you can inspect the iptables rule on the appropriate node; for RabbitMQ, you would check the controller: Alternatively, you can extract the port requirements from the puppet definition. For example, RabbitMQ's rules are stored in puppet/services/rabbitmq.yaml : The following parameters can be set for a rule: port : The port associated to the rule. Deprecated by puppetlabs-firewall . dport : The destination port associated to the rule. sport : The source port associated to the rule. proto : The protocol associated to the rule. Defaults to tcp action : The action policy associated to the rule. Defaults to accept jump : The chain to jump to. state : Array of states associated to the rule. Default to [ NEW ] source : The source IP address associated to the rule. iniface : The network interface associated to the rule. chain : The chain associated to the rule. Default to INPUT destination : The destination cidr associated to the rule. extras : Hash of any additional parameters supported by the puppetlabs-firewall module. 6.4. Intrusion detection with AIDE AIDE (Advanced Intrusion Detection Environment) is a file and directory integrity checker. It is used to detect incidents of unauthorized file tampering or changes. For example, AIDE can alert you if system password files are changed. AIDE works by analyzing system files and then compiling an integrity database of file hashes. 
The database then serves as a comparison point to verify the integrity of the files and directories and detect changes. The director includes the AIDE service, allowing you to add entries into an AIDE configuration, which is then used by the AIDE service to create an integrity database. For example: Note The above example is not actively maintained or benchmarked, so you should select the AIDE values that suit your requirements. An alias named TripleORules is declared to avoid having to repeatedly write out the same attributes each time. The alias receives the attributes of p+sha256 . In AIDE terms, this reads as the following instruction: monitor all file permissions p with an integrity checksum of sha256 . For a complete list of attributes available for AIDE's config files, see the AIDE MAN page at https://aide.github.io/ . To apply this change to your deployment, save the settings as a file called aide.yaml , and then pass it to the overcloud deploy command as follows. The <full environment> indicates that you must still include all of your original deployment parameters. For example: 6.4.1. Using complex AIDE rules Complex rules can be created using the format described previously. For example: The above would translate as the following instruction: monitor permissions, inodes, number of links, user, group, size, block count, mtime, ctime, using sha256 for checksum generation. Note, the alias should always have an order position of 1 , which means that it is positioned at the top of the AIDE rules and is applied recursively to all values below. Following after the alias are the directories to monitor. Note that regular expressions can be used. For example, we set monitoring for the var directory, but override it with a not clause using ! with '!/var/log.*' and '!/var/spool.*' . 6.4.2. Additional AIDE values The following AIDE values are also available: AideConfPath : The full POSIX path to the AIDE configuration file; this defaults to /etc/aide.conf . If no requirement is in place to change the file location, it is recommended to stick with the default path. AideDBPath : The full POSIX path to the AIDE integrity database. This value is configurable to allow operators to declare their own full path, as often AIDE database files are stored off-node, perhaps on a read-only file mount. AideDBTempPath : The full POSIX path to the AIDE integrity temporary database. This temporary file is created when AIDE initializes a new database. AideHour : This value sets the hour attribute as part of the AIDE cron configuration. AideMinute : This value sets the minute attribute as part of the AIDE cron configuration. AideCronUser : This value sets the Linux user as part of the AIDE cron configuration. AideEmail : This value sets the email address that receives AIDE reports each time a cron run is made. AideMuaPath : This value sets the path to the Mail User Agent that is used to send AIDE reports to the email address set within AideEmail . 6.4.3. Cron configuration for AIDE The AIDE director service allows you to configure a cron job. By default, it will send reports to /var/log/audit/ ; if you want to use email alerts, then enable the AideEmail parameter to send the alerts to the configured email address. Note that a reliance on email for critical alerts can be vulnerable to system outages and unintentional message filtering. 6.4.4.
Considering the effect of system upgrades When an upgrade is performed, the AIDE service will automatically regenerate a new integrity database to ensure all upgraded files are correctly recomputed to possess an updated checksum. If openstack overcloud deploy is called as a subsequent run to an initial deployment, and the AIDE configuration rules are changed, the director AIDE service will rebuild the database to ensure the new config attributes are encapsulated in the integrity database. 6.5. Review SecureTTY SecureTTY allows you to disable root access for any console device (tty). This behavior is managed by entries in the /etc/securetty file. For example: 6.6. CADF auditing for Identity Service A thorough auditing process can help you review the ongoing security posture of your OpenStack deployment. This is especially important for keystone, due to its role in the security model. Red Hat OpenStack Platform has adopted Cloud Auditing Data Federation (CADF) as the data format for audit events, with the keystone service generating CADF events for Identity and Token operations. You can enable CADF auditing for keystone using KeystoneNotificationFormat : 6.7. Review the login.defs values To enforce password requirements for new system users (non-keystone), director can add entries to /etc/login.defs by following these example parameters:
[ "resource_registry: OS::TripleO::Services::Sshd: ../puppet/services/sshd.yaml parameter_defaults: BannerText: | ****************************************************************** * This system is for the use of authorized users only. Usage of * * this system may be monitored and recorded by system personnel. * * Anyone using this system expressly consents to such monitoring * * and is advised that if such monitoring reveals possible * * evidence of criminal activity, system personnel may provide * * the evidence from such monitoring to law enforcement officials.* ******************************************************************", "openstack overcloud deploy --templates -e <full environment> -e ssh_banner.yaml", "resource_registry: OS::Tripleo::Services::AuditD: /usr/share/openstack-tripleo-heat-templates/deployment/auditd/auditd-baremetal-puppet.yaml parameter_defaults: AuditdRules: 'Record Events that Modify User/Group Information': content: '-w /etc/group -p wa -k audit_rules_usergroup_modification' order : 1 'Collects System Administrator Actions': content: '-w /etc/sudoers -p wa -k actions' order : 2 'Record Events that Modify the Systems Mandatory Access Controls': content: '-w /etc/selinux/ -p wa -k MAC-policy' order : 3", "parameter_defaults: ControllerExtraConfig: tripleo::firewall::firewall_rules: '301 allow zabbix': dport: 10050 proto: tcp source: 10.0.0.8 action: accept", "parameter_defaults: ControllerExtraConfig: tripleo::firewall::firewall_rules: '098 allow rabbit from internalapi network': dport: [4369,5672,25672] proto: tcp source: 10.0.0.0/24 action: accept '099 drop other rabbit access': dport: [4369,5672,25672] proto: tcp action: drop", "iptables-save [...] -A INPUT -p tcp -m multiport --dports 4369,5672,25672 -m comment --comment \"109 rabbitmq\" -m state --state NEW -j ACCEPT", "tripleo.rabbitmq.firewall_rules: '109 rabbitmq': dport: - 4369 - 5672 - 25672", "resource_registry: OS::TripleO::Services::Aide: ../puppet/services/aide.yaml parameter_defaults: AideRules: 'TripleORules': content: 'TripleORules = p+sha256' order: 1 'etc': content: '/etc/ TripleORules' order: 2 'boot': content: '/boot/ TripleORules' order: 3 'sbin': content: '/sbin/ TripleORules' order: 4 'var': content: '/var/ TripleORules' order: 5 'not var/log': content: '!/var/log.*' order: 6 'not var/spool': content: '!/var/spool.*' order: 7 'not nova instances': content: '!/var/lib/nova/instances.*' order: 8", "openstack overcloud deploy --templates -e <full environment> /usr/share/openstack-tripleo-heat-templates/environments/aide.yaml", "MyAlias = p+i+n+u+g+s+b+m+c+sha512", "resource_registry: OS::TripleO::Services::Securetty: ../puppet/services/securetty.yaml parameter_defaults: TtyValues: - console - tty1 - tty2 - tty3 - tty4 - tty5 - tty6", "parameter_defaults: KeystoneNotificationFormat: cadf", "resource_registry: OS::TripleO::Services::LoginDefs: ../puppet/services/login-defs.yaml parameter_defaults: PasswordMaxDays: 60 PasswordMinDays: 1 PasswordMinLen: 5 PasswordWarnAge: 7 FailDelay: 4" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/security_and_hardening_guide/using_director_to_configure_security_hardening
Chapter 44. Maven Tooling Reference
Chapter 44. Maven Tooling Reference 44.1. Plug-in Setup Abstract before you can use the Apache CXF plug-ins, you must first add the proper dependencies and repositories to your POM. Dependencies You need to add the following dependencies to your project's POM: the JAX-WS frontend the HTTP transport the Undertow transport 44.2. cxf-codegen-plugin Abstract Generates JAX-WS compliant Java code from a WSDL document Overview Basic example The following POM extract shows a simple example of how to configure the Maven cxf-codegen-plugin to process the myService.wsdl WSDL file: Basic configuration settings In the preceding example, you can customize the following configuration settings configuration/sourceRoot Specifies the directory where the generated Java files will be stored. Default is target/generated-sources/cxf . configuration/wsdlOptions/wsdlOption/wsdl Specifies the location of the WSDL file. Description The wsdl2java task takes a WSDL document and generates fully annotated Java code from which to implement a service. The WSDL document must have a valid portType element, but it does not need to contain a binding element or a service element. Using the optional arguments you can customize the generated code. WSDL options At least one wsdlOptions element is required to configure the plug-in. The wsdlOptions element's wsdl child is required and specifies a WSDL document to be processed by the plug-in. In addition to the wsdl element, the wsdlOptions element can take a number of children that can customize how the WSDL document is processed. More than one wsdlOptions element can be listed in the plug-in configuration. Each element configures a single WSDL document for processing. Default options The defaultOptions element is an optional element. It can be used to set options that are used across all of the specified WSDL documents. Important If an option is duplicated in the wsdlOptions element, the value in the wsdlOptions element takes precedent. Specifying code generation options To specify generic code generation options (corresponding to the switches supported by the Apache CXF wsdl2java command-line tool), you can add the extraargs element as a child of a wsdlOption element. For example, you can add the -impl option and the -verbose option as follows: If a switch takes arguments, you can specify these using subsequent extraarg elements. For example, to specify the jibx data binding, you can configure the plug-in as follows: Specifying binding files To specify the location of one or more JAX-WS binding files, you can add the bindingFiles element as a child of wsdlOption -for example: Generating code for a specific WSDL service To specify the name of the WSDL service for which code is to be generated, you can add the serviceName element as a child of wsdlOption (the default behaviour is to generate code for every service in the WSDL document)-for example: Generating code for multiple WSDL files To generate code for multiple WSDL files, simply insert additional wsdlOption elements for the WSDL files. If you want to specify some common options that apply to all of the WSDL files, put the common options into the defaultOptions element as shown: It is also possible to specify multiple WSDL files using wildcard matching. In this case, specify the directory containing the WSDL files using the wsdlRoot element and then select the required WSDL files using an include element, which supports wildcarding with the * character. 
For example, to select all of the WSDL files ending in Service.wsdl from the src/main/resources/wsdl root directory, you could configure the plug-in as follows: Downloading WSDL from a Maven repository To download a WSDL file directly from a Maven repository, add a wsdlArtifact element as a child of the wsdlOption element and specify the coordinates of the Maven artifact, as follows: Encoding (Requires JAXB 2.2) To specify the character encoding (Charset) used for the generated Java files, add an encoding element as a child of the configuration element, as follows: Forking a separate process You can configure the codegen plug-in to fork a separate JVM for code generation, by adding the fork element as a child of the configuration element. The fork element can be set to one of the following values: once Fork a single new JVM to process all of the WSDL files specified in the codegen plug-in's configuration. always Fork a new JVM to process each WSDL file specified in the codegen plug-in's configuration. false (Default) Disables forking. If the codegen plug-in is configured to fork a separate JVM (that is, the fork option is set to a non-false value), you can specify additional JVM arguments to the forked JVM through the additionalJvmArgs element. For example, the following fragment configures the codegen plug-in to fork a single JVM, which is restricted to access XML schemas from the local file system only (by setting the javax.xml.accessExternalSchema system property): Options reference The options used to manage the code generation process are reviewed in the following table. Option Interpretation -fe|-frontend frontend Specifies the front end used by the code generator. Possible values are jaxws , jaxws21 , and cxf . The jaxws21 frontend is used to generate JAX-WS 2.1 compliant code. The cxf frontend, which can optionally be used instead of the jaxws frontend, provides an extra constructor for Service classes. This constructor conveniently enables you to specify the Bus instance for configuring the service. Default is jaxws . -db|-databinding databinding Specifies the data binding used by the code generator. Possible values are: jaxb , xmlbeans , sdo ( sdo-static and sdo-dynamic ), and jibx . Default is jaxb . -wv wsdlVersion Specifies the WSDL version expected by the tool. Default is 1.1 . [a] -p wsdlNamespace = PackageName Specifies zero, or more, package names to use for the generated code. Optionally specifies the WSDL namespace to package name mapping. -b bindingName Specifies one or more JAXWS or JAXB binding files. Use a separate -b flag for each binding file. -sn serviceName Specifies the name of the WSDL service for which code is to be generated. The default is to generate code for every service in the WSDL document. -reserveClass classname Used with -autoNameResolution , defines a class names for wsdl-to-java not to use when generating classes. Use this option multiple times for multiple classes. -catalog catalogUrl Specifies the URL of an XML catalog to use for resolving imported schemas and WSDL documents. -d output-directory Specifies the directory into which the generated code files are written. -compile Compiles generated Java files. -classdir complile-class-dir Specifies the directory into which the compiled class files are written. -clientjar jar-file-name Generates the JAR file that contains all the client classes and the WSDL. The specified wsdlLocation does not work when this option is specified. -client Generates starting point code for a client mainline. 
-server Generates starting point code for a server mainline. -impl Generates starting point code for an implementation object. -all Generates all starting point code: types, service proxy, service interface, server mainline, client mainline, implementation object, and an Ant build.xml file. -ant Generates the Ant build.xml file. -autoNameResolution Automatically resolve naming conflicts without requiring the use of binding customizations. -defaultValues = DefaultValueProvider Instructs the tool to generate default values for the generated client and the generated implementation. Optionally, you can also supply the name of the class used to generate the default values. By default, the RandomValueProvider class is used. -nexclude schema-namespace = java-packagename Ignore the specified WSDL schema namespace when generating code. This option may be specified multiple times. Also, optionally specifies the Java package name used by types described in the excluded namespace(s). -exsh ( true / false ) Enables or disables processing of extended soap header message binding. Default is false. -noTypes Turns off generating types. -dns (true/false) Enables or disables the loading of the default namespace package name mapping. Default is true. -dex (true/false) Enables or disables the loading of the default excludes namespace mapping. Default is true. -xjc args Specifies a comma separated list of arguments to be passed to directly to the XJC when the JAXB data binding is being used. To get a list of all possible XJC arguments use the -xjc-X . -noAddressBinding Instructs the tool to use the Apache CXF proprietary WS-Addressing type instead of the JAX-WS 2.1 compliant mapping. -validate [=all|basic|none] Instructs the tool to validate the WSDL document before attempting to generate any code. -keep Instructs the tool to not overwrite any existing files. -wsdlLocation wsdlLocation Specifies the value of the @WebService annotation's wsdlLocation property. -v Displays the version number for the tool. -verbose|-V Displays comments during the code generation process. -quiet Suppresses comments during the code generation process. -allowElementReferences[=true] , -aer[=true] If true , disregards the rule given in section 2.3.1.2(v) of the JAX-WS 2.2 specification disallowing element references when using wrapper-style mapping. Default is false . -asyncMethods[= method1 , method2 ,... ] List of subsequently generated Java class methods to allow for client-side asynchronous calls; similar to enableAsyncMapping in a JAX-WS binding file. -bareMethods[= method1 , method2 ,... ] List of subsequently generated Java class methods to have wrapper style (see below), similar to enableWrapperStyle in JAX-WS binding file. -mimeMethods[= method1 , method2 ,... ] List of subsequently generated Java class methods to enable mime:content mapping, similar to enableMIMEContent in JAX-WS binding file. -faultSerialVersionUID fault-serialVersionUID How to generate suid of fault exceptions. Possible values are: NONE , TIMESTAMP , FQCN , or a specific number. Default is NONE . -encoding encoding Specifies the Charset encoding to use when generating Java code. -exceptionSuper Superclass for fault beans generated from wsdl:fault elements (defaults to java.lang.Exception ). -seiSuper interfaceName Specifies a base interface for the generated SEI interfaces. For example, this option can be used to add the Java 7 AutoCloseable interface as a super interface. -mark-generated Adds the @Generated annotation to the generated classes. 
[a] Currently, Apache CXF only provides WSDL 1.1 support for the code generator. 44.3. java2ws Abstract generates a WSDL document from Java code Synopsis Description The java2ws task takes a service endpoint implementation (SEI) and generates the support files used to implement a Web service. It can generate the following: a WSDL document the server code needed to deploy the service as a POJO client code for accessing the service wrapper and fault beans Required configuration The plug-in requires that the className configuration element is present. The element's value is the fully qualified name of the SEI to be processed. Optional configuration The configuration element's listed in the following table can be used to fine tune the WSDL generation. Element Description frontend Specifies front end to use for processing the SEI and generating the support classes. jaxws is the default. simple is also supported. databinding Specifies the data binding used for processing the SEI and generating the support classes. The default when using the JAX-WS front end is jaxb . The default when using the simple frontend is aegis . genWsdl Instructs the tool to generate a WSDL document when set to true . genWrapperbean Instructs the tool to generate the wrapper bean and the fault beans when set to true . genClient Instructs the tool to generate client code when set to true . genServer Instructs the tool to generate server code when set to true . outputFile Specifies the name of the generated WSDL file. classpath Specifies the classpath searched when processing the SEI. soap12 Specifies that the generated WSDL document is to include a SOAP 1.2 binding when set to true . targetNamespace Specifies the target namespace to use in the generated WSDL file. serviceName Specifies the value of the generated service element's name attribute.
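Because the example configurations above bind the plug-ins to standard lifecycle phases, running those phases is enough to trigger them. The following sketch simply invokes Maven and assumes the POM fragments shown earlier in this chapter are already in place.
# Runs wsdl2java, which the example POM binds to the generate-sources phase.
mvn generate-sources
# Runs java2ws, which the example POM binds to the process-classes phase.
mvn process-classes
# Both goals also run automatically as part of a full build.
mvn clean install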
[ "<dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-frontend-jaxws</artifactId> <version> version </version> </dependency>", "<dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-transports-http</artifactId> <version> version </version> </dependency>", "<dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-transports-http-undertow</artifactId> <version> version </version> </dependency>", "<plugin> <groupId>org.apache.cxf</groupId> <artifactId>cxf-codegen-plugin</artifactId> <version>3.3.6.fuse-7_13_0-00015-redhat-00001</version> <executions> <execution> <id>generate-sources</id> <phase>generate-sources</phase> <configuration> <sourceRoot>target/generated/src/main/java</sourceRoot> <wsdlOptions> <wsdlOption> <wsdl>src/main/resources/wsdl/myService.wsdl</wsdl> </wsdlOption> </wsdlOptions> </configuration> <goals> <goal>wsdl2java</goal> </goals> </execution> </executions> </plugin>", "<configuration> <sourceRoot>target/generated/src/main/java</sourceRoot> <wsdlOptions> <wsdlOption> <wsdl>USD{basedir}/src/main/resources/wsdl/myService.wsdl</wsdl> <!-- you can set the options of wsdl2java command by using the <extraargs> --> <extraargs> <extraarg>-impl</extraarg> <extraarg>-verbose</extraarg> </extraargs> </wsdlOption> </wsdlOptions> </configuration>", "<configuration> <sourceRoot>target/generated/src/main/java</sourceRoot> <wsdlOptions> <wsdlOption> <wsdl>USD{basedir}/src/main/resources/wsdl/myService.wsdl</wsdl> <extraargs> <extraarg>-databinding</extraarg> <extraarg>jibx</extraarg> </extraargs> </wsdlOption> </wsdlOptions> </configuration>", "<configuration> <wsdlOptions> <wsdlOption> <wsdl>USD{basedir}/src/main/resources/wsdl/myService.wsdl</wsdl> <bindingFiles> <bindingFile>USD{basedir}/src/main/resources/wsdl/async_binding.xml</bindingFile> </bindingFiles> </wsdlOption> </wsdlOptions> </configuration>", "<configuration> <wsdlOptions> <wsdlOption> <wsdl>USD{basedir}/src/main/resources/wsdl/myService.wsdl</wsdl> <serviceName>MyWSDLService</serviceName> </wsdlOption> </wsdlOptions> </configuration>", "<configuration> <defaultOptions> <bindingFiles> <bindingFile>USD{basedir}/src/main/jaxb/bindings.xml</bindingFile> </bindingFiles> <noAddressBinding>true</noAddressBinding> </defaultOptions> <wsdlOptions> <wsdlOption> <wsdl>USD{basedir}/src/main/resources/wsdl/myService.wsdl</wsdl> <serviceName>MyWSDLService</serviceName> </wsdlOption> <wsdlOption> <wsdl>USD{basedir}/src/main/resources/wsdl/myOtherService.wsdl</wsdl> <serviceName>MyOtherWSDLService</serviceName> </wsdlOption> </wsdlOptions> </configuration>", "<configuration> <defaultOptions> <bindingFiles> <bindingFile>USD{basedir}/src/main/jaxb/bindings.xml</bindingFile> </bindingFiles> <noAddressBinding>true</noAddressBinding> </defaultOptions> <wsdlRoot>USD{basedir}/src/main/resources/wsdl</wsdlRoot> <includes> <include>*Service.wsdl</include> </includes> </configuration>", "<configuration> <wsdlOptions> <wsdlOption> <wsdlArtifact> <groupId>org.apache.pizza</groupId> <artifactId>PizzaService</artifactId> <version>1.0.0</version> </wsdlArtifact> </wsdlOption> </wsdlOptions> </configuration>", "<configuration> <wsdlOptions> <wsdlOption> <wsdl>USD{basedir}/src/main/resources/wsdl/myService.wsdl</wsdl> </wsdlOption> </wsdlOptions> <encoding>UTF-8</encoding> </configuration>", "<configuration> <wsdlOptions> <wsdlOption> <wsdl>USD{basedir}/src/main/resources/wsdl/myService.wsdl</wsdl> </wsdlOption> </wsdlOptions> <fork>once</fork> 
<additionalJvmArgs>-Djavax.xml.accessExternalSchema=jar:file,file</additionalJvmArgs> </configuration>", "<plugin> <groupId>org.apache.cxf</groupId> <artifactId>cxf-java2ws-plugin</artifactId> <version> version </version> <executions> <execution> <id>process-classes</id> <phase>process-classes</phase> <configuration> <className> className </className> <option>...</option> </configuration> <goals> <goal>java2ws</goal> </goals> </execution> </executions> </plugin>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXWSMVNTooling
Chapter 17. External DNS Operator
Chapter 17. External DNS Operator 17.1. External DNS Operator in OpenShift Container Platform The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform. 17.1.1. External DNS Operator The External DNS Operator implements the External DNS API from the olm.openshift.io API group. The External DNS Operator updates services, routes, and external DNS providers. Prerequisites You have installed the yq CLI tool. Procedure You can deploy the External DNS Operator on demand from the OperatorHub. Deploying the External DNS Operator creates a Subscription object. Check the name of an install plan by running the following command: USD oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name' Example output install-zcvlr Check if the status of an install plan is Complete by running the following command: USD oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq '.status.phase' Example output Complete View the status of the external-dns-operator deployment by running the following command: USD oc get -n external-dns-operator deployment/external-dns-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE external-dns-operator 1/1 1 1 23h 17.1.2. External DNS Operator logs You can view External DNS Operator logs by using the oc logs command. Procedure View the logs of the External DNS Operator by running the following command: USD oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator 17.1.2.1. External DNS Operator domain name limitations The External DNS Operator uses the TXT registry which adds the prefix for TXT records. This reduces the maximum length of the domain name for TXT records. A DNS record cannot be present without a corresponding TXT record, so the domain name of the DNS record must follow the same limit as the TXT records. For example, a DNS record of <domain_name_from_source> results in a TXT record of external-dns-<record_type>-<domain_name_from_source> . The domain name of the DNS records generated by the External DNS Operator has the following limitations: Record type Number of characters CNAME 44 Wildcard CNAME records on AzureDNS 42 A 48 Wildcard A records on AzureDNS 46 The following error appears in the External DNS Operator logs if the generated domain name exceeds any of the domain name limitations: time="2022-09-02T08:53:57Z" level=error msg="Failure in zone test.example.io. [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]" time="2022-09-02T08:53:57Z" level=error msg="InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with 'external-dns-a-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc']\n\tstatus code: 400, request id: e54dfd5a-06c6-47b0-bcb9-a4f7c3a4e0c6" 17.2. Installing External DNS Operator on cloud providers You can install the External DNS Operator on cloud providers such as AWS, Azure, and GCP. 17.2.1. Installing the External DNS Operator with OperatorHub You can install the External DNS Operator by using the OpenShift Container Platform OperatorHub. Procedure Click Operators OperatorHub in the OpenShift Container Platform web console. Click External DNS Operator . You can use the Filter by keyword text box or the filter list to search for External DNS Operator from the list of Operators. Select the external-dns-operator namespace. On the External DNS Operator page, click Install . 
On the Install Operator page, ensure that you selected the following options: Update the channel as stable-v1 . Installation mode as A specific namespace on the cluster . Installed namespace as external-dns-operator . If namespace external-dns-operator does not exist, it gets created during the Operator installation. Select Approval Strategy as Automatic or Manual . Approval Strategy is set to Automatic by default. Click Install . If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Verification Verify that the External DNS Operator shows the Status as Succeeded on the Installed Operators dashboard. 17.2.2. Installing the External DNS Operator by using the CLI You can install the External DNS Operator by using the CLI. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with cluster-admin permissions. You are logged into the OpenShift CLI ( oc ). Procedure Create a Namespace object: Create a YAML file that defines the Namespace object: Example namespace.yaml file apiVersion: v1 kind: Namespace metadata: name: external-dns-operator Create the Namespace object by running the following command: USD oc apply -f namespace.yaml Create an OperatorGroup object: Create a YAML file that defines the OperatorGroup object: Example operatorgroup.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: external-dns-operator namespace: external-dns-operator spec: upgradeStrategy: Default targetNamespaces: - external-dns-operator Create the OperatorGroup object by running the following command: USD oc apply -f operatorgroup.yaml Create a Subscription object: Create a YAML file that defines the Subscription object: Example subscription.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: external-dns-operator namespace: external-dns-operator spec: channel: stable-v1 installPlanApproval: Automatic name: external-dns-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object by running the following command: USD oc apply -f subscription.yaml Verification Get the name of the install plan from the subscription by running the following command: USD oc -n external-dns-operator \ get subscription external-dns-operator \ --template='{{.status.installplan.name}}{{"\n"}}' Verify that the status of the install plan is Complete by running the following command: USD oc -n external-dns-operator \ get ip <install_plan_name> \ --template='{{.status.phase}}{{"\n"}}' Verify that the status of the external-dns-operator pod is Running by running the following command: USD oc -n external-dns-operator get pod Example output NAME READY STATUS RESTARTS AGE external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m Verify that the catalog source of the subscription is redhat-operators by running the following command: USD oc -n external-dns-operator get subscription Example output NAME PACKAGE SOURCE CHANNEL external-dns-operator external-dns-operator redhat-operators stable-v1 Check the external-dns-operator version by running the following command: USD oc -n external-dns-operator get csv Example output NAME DISPLAY VERSION REPLACES PHASE external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded
17.3. External DNS Operator configuration parameters The External DNS Operator includes the following configuration parameters. 17.3.1. External DNS Operator configuration parameters The External DNS Operator includes the following configuration parameters: Parameter Description spec Enables the type of a cloud provider. spec: provider: type: AWS 1 aws: credentials: name: aws-access-key 2 1 Defines available options such as AWS, GCP, Azure, and Infoblox. 2 Defines a secret name for your cloud provider. zones Enables you to specify DNS zones by their domains. If you do not specify zones, the ExternalDNS resource discovers all of the zones present in your cloud provider account. zones: - "myzoneid" 1 1 Specifies the name of DNS zones. domains Enables you to specify AWS zones by their domains. If you do not specify domains, the ExternalDNS resource discovers all of the zones present in your cloud provider account. domains: - filterType: Include 1 matchType: Exact 2 name: "myzonedomain1.com" 3 - filterType: Include matchType: Pattern 4 pattern: ".*\\.otherzonedomain\\.com" 5 1 Ensures that the ExternalDNS resource includes the domain name. 2 Instructs ExternalDNS that the domain matching has to be exact as opposed to regular expression match. 3 Defines the name of the domain. 4 Sets the regex-domain-filter flag in the ExternalDNS resource. You can limit possible domains by using a Regex filter. 5 Defines the regex pattern to be used by the ExternalDNS resource to filter the domains of the target zones. source Enables you to specify the source for the DNS records, Service or Route . source: 1 type: Service 2 service: serviceType: 3 - LoadBalancer - ClusterIP labelFilter: 4 matchLabels: external-dns.mydomain.org/publish: "yes" hostnameAnnotation: "Allow" 5 fqdnTemplate: - "{{.Name}}.myzonedomain.com" 6 1 Defines the settings for the source of DNS records. 2 The ExternalDNS resource uses the Service type as the source for creating DNS records. 3 Sets the service-type-filter flag in the ExternalDNS resource. The serviceType contains the following fields: default : LoadBalancer expected : ClusterIP NodePort LoadBalancer ExternalName 4 Ensures that the controller considers only those resources which matches with label filter. 5 The default value for hostnameAnnotation is Ignore which instructs ExternalDNS to generate DNS records using the templates specified in the field fqdnTemplates . When the value is Allow the DNS records get generated based on the value specified in the external-dns.alpha.kubernetes.io/hostname annotation. 6 The External DNS Operator uses a string to generate DNS names from sources that don't define a hostname, or to add a hostname suffix when paired with the fake source. source: type: OpenShiftRoute 1 openshiftRouteOptions: routerName: default 2 labelFilter: matchLabels: external-dns.mydomain.org/publish: "yes" 1 Creates DNS records. 2 If the source type is OpenShiftRoute , then you can pass the Ingress Controller name. The ExternalDNS resource uses the canonical name of the Ingress Controller as the target for CNAME records. 17.4. Creating DNS records on AWS You can create DNS records on AWS and AWS GovCloud by using External DNS Operator. 17.4.1. Creating DNS records on an public hosted zone for AWS by using Red Hat External DNS Operator You can create DNS records on a public hosted zone for AWS by using the Red Hat External DNS Operator. You can use the same instructions to create DNS records on a hosted zone for AWS GovCloud. Procedure Check the user. 
The user must have access to the kube-system namespace. If you don't have the credentials, you can fetch the credentials from the kube-system namespace to use the cloud provider client: USD oc whoami Example output system:admin Fetch the values from the aws-creds secret present in the kube-system namespace. USD export AWS_ACCESS_KEY_ID=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) USD export AWS_SECRET_ACCESS_KEY=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d) Get the routes to check the domain: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.testextdnsoperator.apacshift.support console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None Get the list of DNS zones to find the one that corresponds to the previously found route's domain: USD aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support Example output HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5 Create ExternalDNS resource for route source: USD cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: testextdnsoperator.apacshift.support 4 provider: type: AWS 5 source: 6 type: OpenShiftRoute 7 openshiftRouteOptions: routerName: default 8 EOF 1 Defines the name of the external DNS resource. 2 By default all hosted zones are selected as potential targets. You can include a hosted zone that you need. 3 The matching of the target zone's domain has to be exact (as opposed to regular expression match). 4 Specify the exact domain of the zone you want to update. The hostname of the routes must be subdomains of the specified domain. 5 Defines the AWS Route53 DNS provider. 6 Defines options for the source of DNS records. 7 Defines the OpenShift route resource as the source for the DNS records that get created in the previously specified DNS provider. 8 If the source is OpenShiftRoute , then you can pass the OpenShift Ingress Controller name. External DNS Operator selects the canonical hostname of that router as the target while creating CNAME record. Check the records created for OCP routes using the following command: USD aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query "ResourceRecordSets[?Type == 'CNAME']" | grep console 17.5. Creating DNS records on Azure You can create DNS records on Azure by using the External DNS Operator. Important Using the External DNS Operator on a Microsoft Entra Workload ID-enabled cluster or a cluster that runs in Microsoft Azure Government (MAG) regions is not supported. 17.5.1. Creating DNS records on an Azure public DNS zone You can create DNS records on a public DNS zone for Azure by using the External DNS Operator. Prerequisites You must have administrator privileges. The admin user must have access to the kube-system namespace.
Procedure Fetch the credentials from the kube-system namespace to use the cloud provider client by running the following command: USD CLIENT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d) USD CLIENT_SECRET=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d) USD RESOURCE_GROUP=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d) USD SUBSCRIPTION_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d) USD TENANT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d) Log in to Azure by running the following command: USD az login --service-principal -u "USD{CLIENT_ID}" -p "USD{CLIENT_SECRET}" --tenant "USD{TENANT_ID}" Get a list of routes by running the following command: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.test.azure.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.azure.example.com downloads http edge/Redirect None Get a list of DNS zones by running the following command: USD az network dns zone list --resource-group "USD{RESOURCE_GROUP}" Create a YAML file, for example, external-dns-sample-azure.yaml , that defines the ExternalDNS object: Example external-dns-sample-azure.yaml file apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-azure 1 spec: zones: - "/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com" 2 provider: type: Azure 3 source: openshiftRouteOptions: 4 routerName: default 5 type: OpenShiftRoute 6 1 Specifies the External DNS name. 2 Defines the zone ID. 3 Defines the provider type. 4 You can define options for the source of DNS records. 5 If the source type is OpenShiftRoute , you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record. 6 Defines the route resource as the source for the Azure DNS records. Check the DNS records created for OpenShift Container Platform routes by running the following command: USD az network dns record-set list -g "USD{RESOURCE_GROUP}" -z test.azure.example.com | grep console Note To create records on private hosted zones on private Azure DNS, you need to specify the private zone under the zones field which populates the provider type to azure-private-dns in the ExternalDNS container arguments. 17.6. Creating DNS records on GCP You can create DNS records on Google Cloud Platform (GCP) by using the External DNS Operator. Important Using the External DNS Operator on a cluster with GCP Workload Identity enabled is not supported. For more information about the GCP Workload Identity, see Using manual mode with GCP Workload Identity . 17.6.1. Creating DNS records on a public managed zone for GCP You can create DNS records on a public managed zone for GCP by using the External DNS Operator. Prerequisites You must have administrator privileges. 
Procedure Copy the gcp-credentials secret in the encoded-gcloud.json file by running the following command: USD oc get secret gcp-credentials -n kube-system --template='{{USDv := index .data "service_account.json"}}{{USDv}}' | base64 -d - > decoded-gcloud.json Export your Google credentials by running the following command: USD export GOOGLE_CREDENTIALS=decoded-gcloud.json Activate your account by using the following command: USD gcloud auth activate-service-account <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json Set your project by running the following command: USD gcloud config set project <project_id as per decoded-gcloud.json> Get a list of routes by running the following command: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.test.gcp.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None Get a list of managed zones by running the following command: USD gcloud dns managed-zones list | grep test.gcp.example.com Example output qe-cvs4g-private-zone test.gcp.example.com Create a YAML file, for example, external-dns-sample-gcp.yaml , that defines the ExternalDNS object: Example external-dns-sample-gcp.yaml file apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-gcp 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: test.gcp.example.com 4 provider: type: GCP 5 source: openshiftRouteOptions: 6 routerName: default 7 type: OpenShiftRoute 8 1 Specifies the External DNS name. 2 By default, all hosted zones are selected as potential targets. You can include your hosted zone. 3 The domain of the target must match the string defined by the name key. 4 Specify the exact domain of the zone you want to update. The hostname of the routes must be subdomains of the specified domain. 5 Defines the provider type. 6 You can define options for the source of DNS records. 7 If the source type is OpenShiftRoute , you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record. 8 Defines the route resource as the source for GCP DNS records. Check the DNS records created for OpenShift Container Platform routes by running the following command: USD gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console 17.7. Creating DNS records on Infoblox You can create DNS records on Infoblox by using the External DNS Operator. 17.7.1. Creating DNS records on a public DNS zone on Infoblox You can create DNS records on a public DNS zone on Infoblox by using the External DNS Operator. Prerequisites You have access to the OpenShift CLI ( oc ). You have access to the Infoblox UI. 
Procedure Create a secret object with Infoblox credentials by running the following command: USD oc -n external-dns-operator create secret generic infoblox-credentials --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_USERNAME=<infoblox_username> --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_PASSWORD=<infoblox_password> Get a list of routes by running the following command: USD oc get routes --all-namespaces | grep console Example Output openshift-console console console-openshift-console.apps.test.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.example.com downloads http edge/Redirect None Create a YAML file, for example, external-dns-sample-infoblox.yaml , that defines the ExternalDNS object: Example external-dns-sample-infoblox.yaml file apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-infoblox 1 spec: provider: type: Infoblox 2 infoblox: credentials: name: infoblox-credentials gridHost: USD{INFOBLOX_GRID_PUBLIC_IP} wapiPort: 443 wapiVersion: "2.3.1" domains: - filterType: Include matchType: Exact name: test.example.com source: type: OpenShiftRoute 3 openshiftRouteOptions: routerName: default 4 1 Specifies the External DNS name. 2 Defines the provider type. 3 You can define options for the source of DNS records. 4 If the source type is OpenShiftRoute , you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record. Create the ExternalDNS resource on Infoblox by running the following command: USD oc create -f external-dns-sample-infoblox.yaml From the Infoblox UI, check the DNS records created for console routes: Click Data Management DNS Zones . Select the zone name. 17.8. Configuring the cluster-wide proxy on the External DNS Operator After configuring the cluster-wide proxy, the Operator Lifecycle Manager (OLM) triggers automatic updates to all of the deployed Operators with the new contents of the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables. 17.8.1. Trusting the certificate authority of the cluster-wide proxy You can configure the External DNS Operator to trust the certificate authority of the cluster-wide proxy. Procedure Create the config map to contain the CA bundle in the external-dns-operator namespace by running the following command: USD oc -n external-dns-operator create configmap trusted-ca To inject the trusted CA bundle into the config map, add the config.openshift.io/inject-trusted-cabundle=true label to the config map by running the following command: USD oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true Update the subscription of the External DNS Operator by running the following command: USD oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{"op": "add", "path": "/spec/config", "value":{"env":[{"name":"TRUSTED_CA_CONFIGMAP_NAME","value":"trusted-ca"}]}}]' Verification After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the external-dns-operator deployment by running the following command: USD oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME Example output trusted-ca
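For reference, a sketch of how the Subscription object might look after the patch is applied, assuming the subscription values used earlier in this chapter; the spec.config.env entry is the only part added by the patch:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: external-dns-operator
  namespace: external-dns-operator
spec:
  channel: stable-v1
  installPlanApproval: Automatic
  name: external-dns-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env: # added by the oc patch command shown above
    - name: TRUSTED_CA_CONFIGMAP_NAME
      value: trusted-ca

If the environment variable does not appear in the deployment, verify that the trusted-ca config map exists and carries the config.openshift.io/inject-trusted-cabundle=true label.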
[ "oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name'", "install-zcvlr", "oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq '.status.phase'", "Complete", "oc get -n external-dns-operator deployment/external-dns-operator", "NAME READY UP-TO-DATE AVAILABLE AGE external-dns-operator 1/1 1 1 23h", "oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator", "time=\"2022-09-02T08:53:57Z\" level=error msg=\"Failure in zone test.example.io. [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]\" time=\"2022-09-02T08:53:57Z\" level=error msg=\"InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with 'external-dns-a-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc']\\n\\tstatus code: 400, request id: e54dfd5a-06c6-47b0-bcb9-a4f7c3a4e0c6\"", "apiVersion: v1 kind: Namespace metadata: name: external-dns-operator", "oc apply -f namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: external-dns-operator namespace: external-dns-operator spec: upgradeStrategy: Default targetNamespaces: - external-dns-operator", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: external-dns-operator namespace: external-dns-operator spec: channel: stable-v1 installPlanApproval: Automatic name: external-dns-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc apply -f subscription.yaml", "oc -n external-dns-operator get subscription external-dns-operator --template='{{.status.installplan.name}}{{\"\\n\"}}'", "oc -n external-dns-operator get ip <install_plan_name> --template='{{.status.phase}}{{\"\\n\"}}'", "oc -n external-dns-operator get pod", "NAME READY STATUS RESTARTS AGE external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m", "oc -n external-dns-operator get subscription", "NAME PACKAGE SOURCE CHANNEL external-dns-operator external-dns-operator redhat-operators stable-v1", "oc -n external-dns-operator get csv", "NAME DISPLAY VERSION REPLACES PHASE external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded", "spec: provider: type: AWS 1 aws: credentials: name: aws-access-key 2", "zones: - \"myzoneid\" 1", "domains: - filterType: Include 1 matchType: Exact 2 name: \"myzonedomain1.com\" 3 - filterType: Include matchType: Pattern 4 pattern: \".*\\\\.otherzonedomain\\\\.com\" 5", "source: 1 type: Service 2 service: serviceType: 3 - LoadBalancer - ClusterIP labelFilter: 4 matchLabels: external-dns.mydomain.org/publish: \"yes\" hostnameAnnotation: \"Allow\" 5 fqdnTemplate: - \"{{.Name}}.myzonedomain.com\" 6", "source: type: OpenShiftRoute 1 openshiftRouteOptions: routerName: default 2 labelFilter: matchLabels: external-dns.mydomain.org/publish: \"yes\"", "oc whoami", "system:admin", "export AWS_ACCESS_KEY_ID=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) export AWS_SECRET_ACCESS_KEY=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d)", "oc get routes --all-namespaces | grep console", "openshift-console console console-openshift-console.apps.testextdnsoperator.apacshift.support console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None", "aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support", "HOSTEDZONES terraform 
/hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5", "cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: testextdnsoperator.apacshift.support 4 provider: type: AWS 5 source: 6 type: OpenShiftRoute 7 openshiftRouteOptions: routerName: default 8 EOF", "aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query \"ResourceRecordSets[?Type == 'CNAME']\" | grep console", "CLIENT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d) CLIENT_SECRET=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d) RESOURCE_GROUP=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d) SUBSCRIPTION_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d) TENANT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)", "az login --service-principal -u \"USD{CLIENT_ID}\" -p \"USD{CLIENT_SECRET}\" --tenant \"USD{TENANT_ID}\"", "oc get routes --all-namespaces | grep console", "openshift-console console console-openshift-console.apps.test.azure.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.azure.example.com downloads http edge/Redirect None", "az network dns zone list --resource-group \"USD{RESOURCE_GROUP}\"", "apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-azure 1 spec: zones: - \"/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com\" 2 provider: type: Azure 3 source: openshiftRouteOptions: 4 routerName: default 5 type: OpenShiftRoute 6", "az network dns record-set list -g \"USD{RESOURCE_GROUP}\" -z test.azure.example.com | grep console", "oc get secret gcp-credentials -n kube-system --template='{{USDv := index .data \"service_account.json\"}}{{USDv}}' | base64 -d - > decoded-gcloud.json", "export GOOGLE_CREDENTIALS=decoded-gcloud.json", "gcloud auth activate-service-account <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json", "gcloud config set project <project_id as per decoded-gcloud.json>", "oc get routes --all-namespaces | grep console", "openshift-console console console-openshift-console.apps.test.gcp.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None", "gcloud dns managed-zones list | grep test.gcp.example.com", "qe-cvs4g-private-zone test.gcp.example.com", "apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-gcp 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: test.gcp.example.com 4 provider: type: GCP 5 source: openshiftRouteOptions: 6 routerName: default 7 type: OpenShiftRoute 8", "gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console", "oc -n external-dns-operator create secret generic infoblox-credentials --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_USERNAME=<infoblox_username> --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_PASSWORD=<infoblox_password>", "oc get routes --all-namespaces | grep console", "openshift-console console console-openshift-console.apps.test.example.com console https 
reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.example.com downloads http edge/Redirect None", "apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-infoblox 1 spec: provider: type: Infoblox 2 infoblox: credentials: name: infoblox-credentials gridHost: USD{INFOBLOX_GRID_PUBLIC_IP} wapiPort: 443 wapiVersion: \"2.3.1\" domains: - filterType: Include matchType: Exact name: test.example.com source: type: OpenShiftRoute 3 openshiftRouteOptions: routerName: default 4", "oc create -f external-dns-sample-infoblox.yaml", "oc -n external-dns-operator create configmap trusted-ca", "oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true", "oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/config\", \"value\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}]'", "oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME", "trusted-ca" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/external-dns-operator-1
Chapter 3. Analyzing your projects with the MTA plugin
Chapter 3. Analyzing your projects with the MTA plugin You can analyze your projects with the MTA plugin by creating a run configuration, running an analysis, and then reviewing and resolving migration issues detected by the MTA plugin. 3.1. Creating a run configuration You can create a run configuration in the Issue Explorer . A run configuration specifies the project to analyze, the migration path, and additional options. You can create multiple run configurations. Each run configuration must have a unique name. Prerequisite You must import your projects into the Eclipse IDE. Procedure In the Issue Explorer , click the MTA icon ( ) to create a run configuration. On the Input tab, complete the following fields: Select a migration path. Beside the Projects field, click Add and select one or more projects. Beside the Packages field, click Add and select one or more packages. Note Specifying the packages for analysis reduces the run time. If you do not select any packages, all packages in the project are scanned. On the Options tab, you can select Generate Report to generate an HTML report. The report is displayed in the Report tab and saved as a file. Other options are displayed. See About MTA command-line arguments in the CLI Guide for details. On the Rules tab, you can select custom rulesets that you have imported or created for the MTA plugin. Click Run to start the analysis. 3.2. Analyzing projects You can analyze your projects by running the MTA plugin with a saved run configuration. Procedure In the MTA perspective, click the Run button ( ) and select a run configuration. The MTA plugin analyzes your projects. The Issue Explorer displays migration issues that are detected with the ruleset. When you have finished analyzing your projects, stop the MTA server in the Issue Explorer to conserve memory. 3.3. Reviewing issues You can review issues identified by the MTA plugin. Procedure Click Window Show View Issue Explorer . Optional: Filter the issues by clicking the Options menu , selecting Group By and an option. Right-click and select Issue Details to view information about the issue, including its severity and how to address it. The following icons indicate the severity and state of an issue: Table 3.1. Issue icons Icon Description The issue must be fixed for a successful migration. The issue is optional to fix for migration. The issue might need to be addressed during migration. The issue was resolved. The issue is stale. The code marked as an issue was modified since the last time that MTA identified it as an issue. A quick fix is available for this issue, which is mandatory to fix for a successful migration. A quick fix is available for this issue, which is optional to fix for migration. A quick fix is available for this issue, which might be an issue during migration. Double-click an issue to open the associated line of code in an editor. 3.4. Resolving issues You can resolve issues detected by the MTA plugin by performing one of the following actions: You can double-click the issue to open it in an editor and edit the source code. The issue displays a Stale icon ( ) until the next time you run the MTA plugin. You can right-click the issue and select Mark as Fixed . If the issue displays a Quick Fix icon ( ), you can right-click the issue and select Preview Quick Fix and then Apply Quick Fix .
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/eclipse_plugin_guide/analyzing-projects-with-plugin_eclipse-code-ready-studio-guide
Chapter 5. Using Container Storage Interface (CSI)
Chapter 5. Using Container Storage Interface (CSI) 5.1. Configuring CSI volumes The Container Storage Interface (CSI) allows Red Hat OpenShift Service on AWS to consume storage from storage back ends that implement the CSI interface as persistent storage. Note Red Hat OpenShift Service on AWS 4 supports version 1.6.0 of the CSI specification . 5.1.1. CSI architecture CSI drivers are typically shipped as container images. These containers are not aware of Red Hat OpenShift Service on AWS where they run. To use CSI-compatible storage back end in Red Hat OpenShift Service on AWS, the cluster administrator must deploy several components that serve as a bridge between Red Hat OpenShift Service on AWS and the storage driver. The following diagram provides a high-level overview about the components running in pods in the Red Hat OpenShift Service on AWS cluster. It is possible to run multiple CSI drivers for different storage back ends. Each driver needs its own external controllers deployment and daemon set with the driver and CSI registrar. 5.1.1.1. External CSI controllers External CSI controllers is a deployment that deploys one or more pods with five containers: The snapshotter container watches VolumeSnapshot and VolumeSnapshotContent objects and is responsible for the creation and deletion of VolumeSnapshotContent object. The resizer container is a sidecar container that watches for PersistentVolumeClaim updates and triggers ControllerExpandVolume operations against a CSI endpoint if you request more storage on PersistentVolumeClaim object. An external CSI attacher container translates attach and detach calls from Red Hat OpenShift Service on AWS to respective ControllerPublish and ControllerUnpublish calls to the CSI driver. An external CSI provisioner container that translates provision and delete calls from Red Hat OpenShift Service on AWS to respective CreateVolume and DeleteVolume calls to the CSI driver. A CSI driver container. The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod. Note The attach , detach , provision , and delete operations typically require the CSI driver to use credentials to the storage backend. Run the CSI controller pods on infrastructure nodes so the credentials are never leaked to user processes, even in the event of a catastrophic security breach on a compute node. Note The external attacher must also run for CSI drivers that do not support third-party attach or detach operations. The external attacher will not issue any ControllerPublish or ControllerUnpublish operations to the CSI driver. However, it still must run to implement the necessary Red Hat OpenShift Service on AWS attachment API. 5.1.1.2. CSI driver daemon set The CSI driver daemon set runs a pod on every node that allows Red Hat OpenShift Service on AWS to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers: A CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node. A CSI driver. The CSI driver deployed on the node should have as few credentials to the storage back end as possible. 
Red Hat OpenShift Service on AWS will only use the node plugin set of CSI calls such as NodePublish / NodeUnpublish and NodeStage / NodeUnstage , if these calls are implemented. 5.1.2. CSI drivers supported by Red Hat OpenShift Service on AWS Red Hat OpenShift Service on AWS installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plugins. To create CSI-provisioned persistent volumes that mount to these supported storage assets, Red Hat OpenShift Service on AWS installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator. The following table describes the CSI drivers that are installed with Red Hat OpenShift Service on AWS and which CSI features they support, such as volume snapshots and resize. In addition to the drivers listed in the following table, ROSA functions with CSI drivers from third-party storage vendors. Red Hat does not oversee third-party provisioners or the connected CSI drivers and the vendors fully control source code, deployment, operation, and Kubernetes compatibility. These volume provisioners are considered customer-managed and the respective vendors are responsible for providing support. See the Shared responsibilities for Red Hat OpenShift Service on AWS matrix for more information. Table 5.1. Supported CSI drivers and features in Red Hat OpenShift Service on AWS CSI driver CSI volume snapshots CSI volume group snapshots [1] CSI cloning CSI resize Inline ephemeral volumes AWS EBS ✅ ✅ AWS EFS LVM Storage ✅ ✅ ✅ 5.1.3. Dynamic provisioning Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage back end. The provider of the CSI driver should document how to create a storage class in Red Hat OpenShift Service on AWS and the parameters available for configuration. The created storage class can be configured to enable dynamic provisioning. Procedure Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver. # oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: "true" provisioner: <provisioner-name> 2 parameters: EOF 1 The name of the storage class that will be created. 2 The name of the CSI driver that has been installed. 5.1.4. Example using the CSI driver The following example installs a default MySQL template without any changes to the template. Prerequisites The CSI driver has been deployed. A storage class has been created for dynamic provisioning. Procedure Create the MySQL template: # oc new-app mysql-persistent Example output --> Deploying template "openshift/mysql-persistent" to project default ... # oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s 5.2. Managing the default storage class 5.2.1. Overview Managing the default storage class allows you to accomplish several different objectives: Enforcing static provisioning by disabling dynamic provisioning. When you have other preferred storage classes, preventing the storage operator from re-creating the initial default storage class. 
Renaming, or otherwise changing, the default storage class To accomplish these objectives, you change the setting for the spec.storageClassState field in the ClusterCSIDriver object. The possible settings for this field are: Managed : (Default) The Container Storage Interface (CSI) operator is actively managing its default storage class, so that most manual changes made by a cluster administrator to the default storage class are removed, and the default storage class is continuously re-created if you attempt to manually delete it. Unmanaged : You can modify the default storage class. The CSI operator is not actively managing storage classes, so that it is not reconciling the default storage class it creates automatically. Removed : The CSI operators deletes the default storage class. 5.2.2. Managing the default storage class using the web console Prerequisites Access to the Red Hat OpenShift Service on AWS web console. Access to the cluster with cluster-admin privileges. Procedure To manage the default storage class using the web console: Log in to the web console. Click Administration > CustomResourceDefinitions . On the CustomResourceDefinitions page, type clustercsidriver to find the ClusterCSIDriver object. Click ClusterCSIDriver , and then click the Instances tab. Click the name of the desired instance, and then click the YAML tab. Add the spec.storageClassState field with a value of Managed , Unmanaged , or Removed . Example ... spec: driverConfig: driverType: '' logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal storageClassState: Unmanaged 1 ... 1 spec.storageClassState field set to "Unmanaged" Click Save . 5.2.3. Managing the default storage class using the CLI Prerequisites Access to the cluster with cluster-admin privileges. Procedure To manage the storage class using the CLI, run the following command: oc patch clustercsidriver USDDRIVERNAME --type=merge -p "{\"spec\":{\"storageClassState\":\"USD{STATE}\"}}" 1 1 Where USD{STATE} is "Removed" or "Managed" or "Unmanaged". Where USDDRIVERNAME is the provisioner name. You can find the provisioner name by running the command oc get sc . 5.2.4. Absent or multiple default storage classes 5.2.4.1. Multiple default storage classes Multiple default storage classes can occur if you mark a non-default storage class as default and do not unset the existing default storage class, or you create a default storage class when a default storage class is already present. With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class ( pvc.spec.storageClassName =nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses . 5.2.4.2. Absent default storage class There are two possible scenarios where PVCs can attempt to use a non-existent default storage class: An administrator removes the default storage class or marks it as non-default, and then a user creates a PVC requesting the default storage class. During installation, the installer creates a PVC requesting the default storage class, which has not yet been created. In the preceding scenarios, PVCs remain in the pending state indefinitely. To resolve this situation, create a default storage class or declare one of the existing storage classes as the default. 
As soon as the default storage class is created or declared, the PVCs get the new default storage class. If possible, the PVCs eventually bind to statically or dynamically provisioned PVs as usual, and move out of the pending state. 5.2.5. Changing the default storage class Use the following procedure to change the default storage class. For example, if you have two defined storage classes, gp3 and standard , and you want to change the default storage class from gp3 to standard . Prerequisites Access to the cluster with cluster-admin privileges. Procedure To change the default storage class: List the storage classes: USD oc get storageclass Example output NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) indicates the default storage class. Make the desired storage class the default. For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command: USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Note You can have multiple default storage classes for a short time. However, you should ensure that only one default storage class exists eventually. With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class ( pvc.spec.storageClassName =nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses . Remove the default storage class setting from the old default storage class. For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command: USD oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs 5.3. AWS Elastic Block Store CSI Driver Operator 5.3.1. Overview Red Hat OpenShift Service on AWS is capable of provisioning persistent volumes (PVs) using the AWS EBS CSI driver . Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to AWS EBS storage assets, Red Hat OpenShift Service on AWS installs the AWS EBS CSI Driver Operator (a Red Hat operator) and the AWS EBS CSI driver by default in the openshift-cluster-csi-drivers namespace. The AWS EBS CSI Driver Operator provides a StorageClass by default that you can use to create PVCs. You can disable this default storage class if desired (see Managing the default storage class ). You also have the option to create the AWS EBS StorageClass as described in Persistent storage using Amazon Elastic Block Store . The AWS EBS CSI driver enables you to create and mount AWS EBS PVs. 5.3.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. 
CSI Operators give Red Hat OpenShift Service on AWS users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Important Red Hat OpenShift Service on AWS defaults to using the CSI plugin to provision Amazon Elastic Block Store (Amazon EBS) storage. For information about dynamically provisioning AWS EBS persistent volumes in Red Hat OpenShift Service on AWS, see Persistent storage using Amazon Elastic Block Store . Additional resources Persistent storage using Amazon Elastic Block Store Configuring CSI volumes 5.4. AWS Elastic File Service CSI Driver Operator Important This procedure is specific to the AWS EFS CSI Driver Operator (a Red Hat Operator), which is only applicable for Red Hat OpenShift Service on AWS 4.10 and later versions. 5.4.1. Overview Red Hat OpenShift Service on AWS is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File Service (EFS). Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. After installing the AWS EFS CSI Driver Operator, Red Hat OpenShift Service on AWS installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the openshift-cluster-csi-drivers namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets. The AWS EFS CSI Driver Operator , after being installed, does not create a storage class by default to use to create persistent volume claims (PVCs). However, you can manually create the AWS EFS StorageClass . The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand. This eliminates the need for cluster administrators to pre-provision storage. The AWS EFS CSI driver enables you to create and mount AWS EFS PVs. Note AWS EFS only supports regional volumes, not zonal volumes. 5.4.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give Red Hat OpenShift Service on AWS users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.4.3. Setting up the AWS EFS CSI Driver Operator If you are using AWS EFS with AWS Secure Token Service (STS), obtain a role Amazon Resource Name (ARN) for STS. This is required for installing the AWS EFS CSI Driver Operator. Install the AWS EFS CSI Driver Operator. Install the AWS EFS CSI Driver. 5.4.3.1. Obtaining a role Amazon Resource Name for Security Token Service This procedure explains how to obtain a role Amazon Resource Name (ARN) to configure the AWS EFS CSI Driver Operator with Red Hat OpenShift Service on AWS on AWS Security Token Service (STS). Important Perform this procedure before you install the AWS EFS CSI Driver Operator (see Installing the AWS EFS CSI Driver Operator procedure). Prerequisites Access to the cluster as a user with the cluster-admin role. 
AWS account credentials Procedure Create an IAM policy JSON file with the following content: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "elasticfilesystem:DescribeAccessPoints", "elasticfilesystem:DescribeFileSystems", "elasticfilesystem:DescribeMountTargets", "ec2:DescribeAvailabilityZones", "elasticfilesystem:TagResource" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "elasticfilesystem:CreateAccessPoint" ], "Resource": "*", "Condition": { "StringLike": { "aws:RequestTag/efs.csi.aws.com/cluster": "true" } } }, { "Effect": "Allow", "Action": "elasticfilesystem:DeleteAccessPoint", "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/efs.csi.aws.com/cluster": "true" } } } ] } Create an IAM trust JSON file with the following content: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::<your_aws_account_ID>:oidc-provider/<openshift_oidc_provider>" 1 }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "<openshift_oidc_provider>:sub": [ 2 "system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-operator", "system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa" ] } } } ] } 1 Specify your AWS account ID and the OpenShift OIDC provider endpoint. Obtain your AWS account ID by running the following command: USD aws sts get-caller-identity --query Account --output text Obtain the OpenShift OIDC endpoint by running the following command: USD rosa describe cluster \ -c USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}') \ -o yaml | awk '/oidc_endpoint_url/ {print USD2}' | cut -d '/' -f 3,4 2 Specify the OpenShift OIDC endpoint again. Create the IAM role: ROLE_ARN=USD(aws iam create-role \ --role-name "<your_cluster_name>-aws-efs-csi-operator" \ --assume-role-policy-document file://<your_trust_file_name>.json \ --query "Role.Arn" --output text); echo USDROLE_ARN Copy the role ARN. You will need it when you install the AWS EFS CSI Driver Operator. Create the IAM policy: POLICY_ARN=USD(aws iam create-policy \ --policy-name "<your_cluster_name>-aws-efs-csi" \ --policy-document file://<your_policy_file_name>.json \ --query 'Policy.Arn' --output text); echo USDPOLICY_ARN Attach the IAM policy to the IAM role: USD aws iam attach-role-policy \ --role-name "<your_cluster_name>-aws-efs-csi-operator" \ --policy-arn USDPOLICY_ARN steps Install the AWS EFS CSI Driver Operator . Additional resources Installing the AWS EFS CSI Driver Operator Installing the AWS EFS CSI Driver 5.4.3.2. Installing the AWS EFS CSI Driver Operator The AWS EFS CSI Driver Operator (a Red Hat Operator) is not installed in Red Hat OpenShift Service on AWS by default. Use the following procedure to install and configure the AWS EFS CSI Driver Operator in your cluster. Prerequisites Access to the Red Hat OpenShift Service on AWS web console. Procedure To install the AWS EFS CSI Driver Operator from the web console: Log in to the web console. Install the AWS EFS CSI Operator: Click Operators OperatorHub . Locate the AWS EFS CSI Operator by typing AWS EFS CSI in the filter box. Click the AWS EFS CSI Driver Operator button. Important Be sure to select the AWS EFS CSI Driver Operator and not the AWS EFS Operator . The AWS EFS Operator is a community Operator and is not supported by Red Hat. On the AWS EFS CSI Driver Operator page, click Install . 
On the Install Operator page, ensure that: If you are using AWS EFS with AWS Secure Token Service (STS), in the role ARN field, enter the ARN role copied from the last step of the Obtaining a role Amazon Resource Name for Security Token Service procedure. All namespaces on the cluster (default) is selected. Installed Namespace is set to openshift-cluster-csi-drivers . Click Install . After the installation finishes, the AWS EFS CSI Operator is listed in the Installed Operators section of the web console. steps Install the AWS EFS CSI Driver . 5.4.3.3. Installing the AWS EFS CSI Driver After installing the AWS EFS CSI Driver Operator (a Red Hat operator), you install the AWS EFS CSI driver . Prerequisites Access to the Red Hat OpenShift Service on AWS web console. Procedure Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, click Create ClusterCSIDriver . Use the following YAML file: apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed Click Create . Wait for the following Conditions to change to a "True" status: AWSEFSDriverNodeServiceControllerAvailable AWSEFSDriverControllerServiceControllerAvailable 5.4.4. Creating the AWS EFS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. The AWS EFS CSI Driver Operator (a Red Hat operator) , after being installed, does not create a storage class by default. However, you can manually create the AWS EFS storage class. 5.4.4.1. Creating the AWS EFS storage class using the console Procedure In the Red Hat OpenShift Service on AWS console, click Storage StorageClasses . On the StorageClasses page, click Create StorageClass . On the StorageClass page, perform the following steps: Enter a name to reference the storage class. Optional: Enter the description. Select the reclaim policy. Select efs.csi.aws.com from the Provisioner drop-down list. Optional: Set the configuration parameters for the selected provisioner. Click Create . 5.4.4.2. Creating the AWS EFS storage class using the CLI Procedure Create a StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: "700" 3 gidRangeStart: "1000" 4 gidRangeEnd: "2000" 5 basePath: "/dynamic_provisioning" 6 1 provisioningMode must be efs-ap to enable dynamic provisioning. 2 fileSystemId must be the ID of the EFS volume created manually. 3 directoryPerms is the default permission of the root directory of the volume. In this example, the volume is accessible only by the owner. 4 5 gidRangeStart and gidRangeEnd set the range of POSIX Group IDs (GIDs) that are used to set the GID of the AWS access point. If not specified, the default range is 50000-7000000. Each provisioned volume, and thus AWS access point, is assigned a unique GID from this range. 6 basePath is the directory on the EFS volume that is used to create dynamically provisioned volumes. In this case, a PV is provisioned as "/dynamic_provisioning/<random uuid>" on the EFS volume. Only the subdirectory is mounted to pods that use the PV. Note A cluster admin can create several StorageClass objects, each using a different EFS volume. 5.4.5. 
Creating and configuring access to EFS volumes in AWS This procedure explains how to create and configure EFS volumes in AWS so that you can use them in Red Hat OpenShift Service on AWS. Prerequisites AWS account credentials Procedure To create and configure access to an EFS volume in AWS: On the AWS console, open https://console.aws.amazon.com/efs . Click Create file system : Enter a name for the file system. For Virtual Private Cloud (VPC) , select your Red Hat OpenShift Service on AWS's virtual private cloud (VPC). Accept default settings for all other selections. Wait for the volume and mount targets to finish being fully created: Go to https://console.aws.amazon.com/efs#/file-systems . Click your volume, and on the Network tab wait for all mount targets to become available (~1-2 minutes). On the Network tab, copy the Security Group ID (you will need this in the next step). Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups , and find the Security Group used by the EFS volume. On the Inbound rules tab, click Edit inbound rules , and then add a new rule with the following settings to allow Red Hat OpenShift Service on AWS nodes to access EFS volumes : Type : NFS Protocol : TCP Port range : 2049 Source : Custom/IP address range of your nodes (for example: "10.0.0.0/16") This step allows Red Hat OpenShift Service on AWS to use NFS ports from the cluster. Save the rule. 5.4.6. Dynamic provisioning for Amazon Elastic File Storage The AWS EFS CSI driver supports a different form of dynamic provisioning than other CSI drivers. It provisions new PVs as subdirectories of a pre-existing EFS volume. The PVs are independent of each other. However, they all share the same EFS volume. When the volume is deleted, all PVs provisioned out of it are deleted too. The EFS CSI driver creates an AWS Access Point for each such subdirectory. Due to AWS AccessPoint limits, you can only dynamically provision 1000 PVs from a single StorageClass /EFS volume. Important Note that PVC.spec.resources is not enforced by EFS. In the example below, you request 5 GiB of space. However, the created PV is limitless and can store any amount of data (like petabytes). A broken application, or even a rogue application, can cause significant expenses when it stores too much data on the volume. Using monitoring of EFS volume sizes in AWS is strongly recommended. Prerequisites You have created Amazon Elastic File Storage (Amazon EFS) volumes. You have created the AWS EFS storage class. Procedure To enable dynamic provisioning: Create a PVC (or StatefulSet or Template) as usual, referring to the StorageClass created previously. apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi If you have problems setting up dynamic provisioning, see AWS EFS troubleshooting . Additional resources Creating and configuring access to AWS EFS volume(s) Creating the AWS EFS storage class 5.4.7. Creating static PVs with Amazon Elastic File Storage It is possible to use an Amazon Elastic File Storage (Amazon EFS) volume as a single PV without any dynamic provisioning. The whole volume is mounted to pods. Prerequisites You have created Amazon EFS volumes.
Procedure Create the PV using the following YAML file: apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: "false" 3 1 spec.capacity does not have any meaning and is ignored by the CSI driver. It is used only when binding to a PVC. Applications can store any amount of data to the volume. 2 volumeHandle must be the same ID as the EFS volume you created in AWS. If you are providing your own access point, volumeHandle should be <EFS volume ID>::<access point ID> . For example: fs-6e633ada::fsap-081a1d293f0004630 . 3 If desired, you can disable encryption in transit. Encryption is enabled by default. If you have problems setting up static PVs, see AWS EFS troubleshooting . 5.4.8. Amazon Elastic File Storage security The following information is important for Amazon Elastic File Storage (Amazon EFS) security. When using access points, for example, by using dynamic provisioning as described earlier, Amazon automatically replaces GIDs on files with the GID of the access point. In addition, EFS considers the user ID, group ID, and secondary group IDs of the access point when evaluating file system permissions. EFS ignores the NFS client's IDs. For more information about access points, see https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html . As a consequence, EFS volumes silently ignore FSGroup; Red Hat OpenShift Service on AWS is not able to replace the GIDs of files on the volume with FSGroup. Any pod that can access a mounted EFS access point can access any file on it. Unrelated to this, encryption in transit is enabled by default. For more information, see https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html . 5.4.9. AWS EFS storage CSI usage metrics 5.4.9.1. Usage metrics overview Amazon Web Services (AWS) Elastic File Service (EFS) storage Container Storage Interface (CSI) usage metrics allow you to monitor how much space is used by either dynamically or statically provisioned EFS volumes. Important This features is disabled by default, because turning on metrics can lead to performance degradation. The AWS EFS usage metrics feature collects volume metrics in the AWS EFS CSI Driver by recursively walking through the files in the volume. Because this effort can degrade performance, administrators must explicitly enable this feature. 5.4.9.2. Enabling usage metrics using the web console To enable Amazon Web Services (AWS) Elastic File Service (EFS) Storage Container Storage Interface (CSI) usage metrics using the web console: Click Administration > CustomResourceDefinitions . On the CustomResourceDefinitions page to the Name dropdown box, type clustercsidriver . Click CRD ClusterCSIDriver . Click the YAML tab. Under spec.aws.efsVolumeMetrics.state , set the value to RecursiveWalk . RecursiveWalk indicates that volume metrics collection in the AWS EFS CSI Driver is performed by recursively walking through the files in the volume. Example ClusterCSIDriver efs.csi.aws.com YAML file spec: driverConfig: driverType: AWS aws: efsVolumeMetrics: state: RecursiveWalk recursiveWalk: refreshPeriodMinutes: 100 fsRateLimit: 10 Optional: To define how the recursive walk operates, you can also set the following fields: refreshPeriodMinutes : Specifies the refresh frequency for volume metrics in minutes. 
If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 240 minutes. The valid range is 1 to 43,200 minutes. fsRateLimit : Defines the rate limit for processing volume metrics in goroutines per file system. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 5 goroutines. The valid range is 1 to 100 goroutines. Click Save . Note To disable AWS EFS CSI usage metrics, use the preceding procedure, but for spec.aws.efsVolumeMetrics.state , change the value from RecursiveWalk to Disabled . 5.4.9.3. Enabling usage metrics using the CLI To enable Amazon Web Services (AWS) Elastic File Service (EFS) storage Container Storage Interface (CSI) usage metrics using the CLI: Edit ClusterCSIDriver by running the following command: USD oc edit clustercsidriver efs.csi.aws.com Under spec.aws.efsVolumeMetrics.state , set the value to RecursiveWalk . RecursiveWalk indicates that volume metrics collection in the AWS EFS CSI Driver is performed by recursively walking through the files in the volume. Example ClusterCSIDriver efs.csi.aws.com YAML file spec: driverConfig: driverType: AWS aws: efsVolumeMetrics: state: RecursiveWalk recursiveWalk: refreshPeriodMinutes: 100 fsRateLimit: 10 Optional: To define how the recursive walk operates, you can also set the following fields: refreshPeriodMinutes : Specifies the refresh frequency for volume metrics in minutes. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 240 minutes. The valid range is 1 to 43,200 minutes. fsRateLimit : Defines the rate limit for processing volume metrics in goroutines per file system. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 5 goroutines. The valid range is 1 to 100 goroutines. Save the changes to the efs.csi.aws.com object. Note To disable AWS EFS CSI usage metrics, use the preceding procedure, but for spec.aws.efsVolumeMetrics.state , change the value from RecursiveWalk to Disabled . 5.4.10. Amazon Elastic File Storage troubleshooting The following information provides guidance on how to troubleshoot issues with Amazon Elastic File Storage (Amazon EFS): The AWS EFS Operator and CSI driver run in namespace openshift-cluster-csi-drivers . To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command: USD oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created To show AWS EFS Operator errors, view the ClusterCSIDriver status: USD oc get clustercsidriver efs.csi.aws.com -o yaml If a volume cannot be mounted to a pod (as shown in the output of the following command): USD oc describe pod ... 
Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume "pvc-d7c097e6-67ec-4fae-b968-7e7056796449" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition 1 Warning message indicating volume not mounted. This error is frequently caused by AWS dropping packets between a Red Hat OpenShift Service on AWS node and Amazon EFS. Check that the following are correct: AWS firewall and Security Groups Networking: port number and IP addresses 5.4.11. Uninstalling the AWS EFS CSI Driver Operator All EFS PVs are inaccessible after uninstalling the AWS EFS CSI Driver Operator (a Red Hat operator). Prerequisites Access to the Red Hat OpenShift Service on AWS web console. Procedure To uninstall the AWS EFS CSI Driver Operator from the web console: Log in to the web console. Stop all applications that use AWS EFS PVs. Delete all AWS EFS PVs: Click Storage PersistentVolumeClaims . Select each PVC that is in use by the AWS EFS CSI Driver Operator, click the drop-down menu on the far right of the PVC, and then click Delete PersistentVolumeClaims . Uninstall the AWS EFS CSI driver : Note Before you can uninstall the Operator, you must remove the CSI driver first. Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, for efs.csi.aws.com , on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver . When prompted, click Delete . Uninstall the AWS EFS CSI Operator: Click Operators Installed Operators . On the Installed Operators page, scroll or type AWS EFS CSI into the Search by name box to find the Operator, and then click it. On the upper right of the Installed Operators > Operator details page, click Actions Uninstall Operator . When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually. After uninstalling, the AWS EFS CSI Driver Operator is no longer listed in the Installed Operators section of the web console. Note Before you can destroy a cluster ( openshift-install destroy cluster ), you must delete the EFS volume in AWS. A Red Hat OpenShift Service on AWS cluster cannot be destroyed when there is an EFS volume that uses the cluster's VPC. Amazon does not allow deletion of such a VPC. 5.4.12. Additional resources Configuring CSI volumes
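To tie the earlier examples together, the following is a minimal sketch of a pod that mounts a claim backed by the EFS CSI driver, such as the test PVC created in the dynamic provisioning procedure. The pod name, image, and mount path are placeholders for illustration only:
apiVersion: v1
kind: Pod
metadata:
  name: efs-app                        # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal   # placeholder image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data                 # placeholder mount path inside the container
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: test                  # PVC provisioned from the efs-sc storage class
Because the claim uses the ReadWriteMany access mode, multiple pods can mount the same volume at the same time, which is the typical reason for choosing EFS over block storage.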
[ "oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF", "oc new-app mysql-persistent", "--> Deploying template \"openshift/mysql-persistent\" to project default", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s", "spec: driverConfig: driverType: '' logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal storageClassState: Unmanaged 1", "patch clustercsidriver USDDRIVERNAME --type=merge -p \"{\\\"spec\\\":{\\\"storageClassState\\\":\\\"USD{STATE}\\\"}}\" 1", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"elasticfilesystem:DescribeAccessPoints\", \"elasticfilesystem:DescribeFileSystems\", \"elasticfilesystem:DescribeMountTargets\", \"ec2:DescribeAvailabilityZones\", \"elasticfilesystem:TagResource\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"elasticfilesystem:CreateAccessPoint\" ], \"Resource\": \"*\", \"Condition\": { \"StringLike\": { \"aws:RequestTag/efs.csi.aws.com/cluster\": \"true\" } } }, { \"Effect\": \"Allow\", \"Action\": \"elasticfilesystem:DeleteAccessPoint\", \"Resource\": \"*\", \"Condition\": { \"StringEquals\": { \"aws:ResourceTag/efs.csi.aws.com/cluster\": \"true\" } } } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::<your_aws_account_ID>:oidc-provider/<openshift_oidc_provider>\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<openshift_oidc_provider>:sub\": [ 2 \"system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-operator\", \"system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa\" ] } } } ] }", "aws sts get-caller-identity --query Account --output text", "rosa describe cluster -c USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}') -o yaml | awk '/oidc_endpoint_url/ {print USD2}' | cut -d '/' -f 3,4", "ROLE_ARN=USD(aws iam create-role --role-name \"<your_cluster_name>-aws-efs-csi-operator\" --assume-role-policy-document file://<your_trust_file_name>.json --query \"Role.Arn\" --output text); echo USDROLE_ARN", "POLICY_ARN=USD(aws iam create-policy --policy-name \"<your_cluster_name>-aws-efs-csi\" --policy-document file://<your_policy_file_name>.json --query 'Policy.Arn' --output text); echo USDPOLICY_ARN", "aws iam attach-role-policy --role-name \"<your_cluster_name>-aws-efs-csi-operator\" --policy-arn USDPOLICY_ARN", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: \"700\" 3 gidRangeStart: \"1000\" 4 
gidRangeEnd: \"2000\" 5 basePath: \"/dynamic_provisioning\" 6", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi", "apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: \"false\" 3", "spec: driverConfig: driverType: AWS aws: efsVolumeMetrics: state: RecursiveWalk recursiveWalk: refreshPeriodMinutes: 100 fsRateLimit: 10", "oc edit clustercsidriver efs.csi.aws.com", "spec: driverConfig: driverType: AWS aws: efsVolumeMetrics: state: RecursiveWalk recursiveWalk: refreshPeriodMinutes: 100 fsRateLimit: 10", "oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created", "oc get clustercsidriver efs.csi.aws.com -o yaml", "oc describe pod Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume \"pvc-d7c097e6-67ec-4fae-b968-7e7056796449\" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/storage/using-container-storage-interface-csi
Chapter 73. PasswordSecretSource schema reference
Chapter 73. PasswordSecretSource schema reference Used in: KafkaClientAuthenticationOAuth , KafkaClientAuthenticationPlain , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 Property Description password The name of the key in the Secret under which the password is stored. string secretName The name of the Secret containing the password. string
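To show how these two properties are typically referenced together, the following is a minimal sketch of a client authentication configuration that points to a PasswordSecretSource. The Secret name, key, and user name shown here are assumptions made for this example and are not part of the schema reference:
authentication:
  type: scram-sha-512
  username: my-user                    # hypothetical user name
  passwordSecret:
    secretName: my-user-secret         # hypothetical Secret containing the password
    password: password                 # key in the Secret under which the password is stored
The referenced Secret is expected to exist in the same namespace as the resource that uses the authentication configuration.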
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-PasswordSecretSource-reference
Chapter 1. Release notes for Red Hat Advanced Cluster Management
Chapter 1. Release notes for Red Hat Advanced Cluster Management Learn about new features and enhancements, support, deprecations, removals, and Errata bug fixes. Important: Cluster lifecycle components and features are within the multicluster engine operator, which is a software operator that enhances cluster fleet management. Release notes for multicluster engine operator-specific features are found in Release notes for Cluster lifecycle with multicluster engine operator . What's new for Red Hat Advanced Cluster Management Errata updates for Red Hat Advanced Cluster Management Known issues and limitations for Red Hat Advanced Cluster Management Deprecations and removals for Red Hat Advanced Cluster Management Red Hat Advanced Cluster Management for Kubernetes considerations for GDPR readiness FIPS readiness Observability support Important: OpenShift Container Platform release notes are not documented in this product documentation. For your OpenShift Container Platform cluster, see OpenShift Container Platform release notes . Deprecated: Red Hat Advanced Cluster Management 2.8 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates. Best practice: Upgrade to the most recent version. The documentation references the earliest supported Red Hat OpenShift Container Platform versions, unless the component in the documentation is created and tested with only a specific version of OpenShift Container Platform. For full support information, see the Red Hat Advanced Cluster Management Support Matrix and the Lifecycle and update policies for Red Hat Advanced Cluster Management for Kubernetes . If you experience issues with one of the currently supported releases, or the product documentation, go to Red Hat Support where you can troubleshoot, view Knowledgebase articles, connect with the Support Team, or open a case. You must log in with your credentials. You can also learn more about the Customer Portal documentation at Red Hat Customer Portal FAQ . 1.1. What's new for Red Hat Advanced Cluster Management Red Hat Advanced Cluster Management for Kubernetes provides visibility of your entire Kubernetes domain with built-in governance, cluster lifecycle management, and application lifecycle management, along with observability. Important: Red Hat Advanced Cluster Management now supports all providers that are certified through the Cloud Native Computing Foundation (CNCF) Kubernetes Conformance Program. Choose a vendor that is recognized by CNCF for your hybrid cloud multicluster management. See the following information about using CNCF providers: Learn how CNCF providers are certified at Certified Kubernetes Conformance . For Red Hat support information about CNCF third-party providers, see Red Hat support with third party components , or Contact Red Hat support . If you bring your own CNCF conformance certified cluster, you need to change the OpenShift Container Platform CLI oc command to the Kubernetes CLI command, kubectl . 1.1.1. New features and enhancements for components Learn specific details about new features for components within Red Hat Advanced Cluster Management: Installation Console Clusters multicluster global hub Observability Governance Backup and restore Some features and components are identified and released as Technology Preview . Access the Red Hat Advanced Cluster Management Support matrix to learn about hub cluster and managed cluster requirements and support for each component.
For lifecycle information, see Red Hat OpenShift Container Platform Life Cycle policy . 1.1.2. Installation You can enable the SiteConfig component from the MultiClusterHub custom resource that is deployed on your cluster. By default, the SiteConfig component is disabled. Learn more at MultiClusterHub advanced configuration . Learn more about SiteConfig operator at SiteConfig . Now when the MultiClusterHub resource prepares to install the multicluster engine operator, it implements CatalogSource priority as criteria. The Red Hat Advanced Cluster Management MultiClusterHub resource seeks the CatalogSource that contains the desired multicluster engine operator version that is compatible with the current Red Hat Advanced Cluster Management version. Learn more in the Catalog source priority section in Install in disconnected network environments . 1.1.3. Console Learn about what is new in the Red Hat Advanced Cluster Management integrated console. Command line interface (CLI) downloads are now available in the console, which are available from the acm-cli container image and are specified with the operating system and architecture. See Command line tools to access command line interface (CLI) downloads, such as the PolicyGenerator and policytools . View more information about your cluster when you enable the Fleet view switch. Many summary cards are redesigned, such as Cluster , Application types , Policies , and Nodes cards. Additionally, there are two new summary cards available, such as Cluster version and Worker core count . See the numerous changes to summary cards in the product console. You can now export data in a CSV file by using selecting the Export button. See Accessing your console . You can now view virtual machine resources from the console and your search results. Configure actions for the virtual machine resources. See Enabling virtual machine actions (Technology Preview) . See Search in the console for more information. 1.1.4. Clusters Important: Cluster lifecycle components and features are within the multicluster engine operator, which is a software operator that enhances cluster fleet management. Release notes for multicluster engine operator-specific features are found in at Release notes for Cluster lifecycle with multicluster engine operator . You can now enable and use SiteConfig operator as a template-driven cluster provisioning solution, which allows you to provision clusters with all available installation methods. Learn more about SiteConfig operator at SiteConfig . View other Cluster lifecycle tasks and support information at Cluster lifecycle with multicluster engine operator overview . 1.1.5. multicluster global hub You can now enable the local-cluster on your managed hub clusters by importing your managed hub cluster in hosted mode. See Importing a managed hub cluster in the hosted mode (Technology Preview) . For other multicluster global hub topics, see multicluster global hub . 1.1.6. Applications You can now use the Red Hat Advanced Cluster Management GitOpsCluster to register a non-OpenShift Container Platform cluster to a Red Hat OpenShift GitOps cluster, giving you more ways to deploy your application. For more information, see: Registering non-OpenShift Container Platform clusters to Red Hat OpenShift GitOps . For other Application topics, see Managing applications . 1.1.7. Observability For more environment stability with default settings, the default CPU request is increased to 500m and memory request is increased to 1024Mi for the thanos-compact pod. 
See Observability pod capacity requests for more details. To create and mount secrets to your alertmanager pods for access to arbitrary content, you can add the contents to your MultiClusterObservability resource. See Mounting secrets within the Alertmanager pods . Grafana is updated to version 11.1.5. See Using Grafana dashboards . You can now use the Advanced search option from the console by selecting the Advanced search drop-down button. Specify your query and receive results that match the exact strings that you enter and range-based search parameters. See Search customization and configurations . Technology Preview : Use the new workers parameter in the ObservabilityAddOn custom resource definition to add more worker nodes into the metric collector process to shard federate requests made to your hub cluster. See Enabling the observability service . See Observability service introduction . 1.1.8. Governance To configure a cluster based on the available node roles, you can now use the getNodesWithExactRoles function to receive a list of nodes, and use the hasNodesWithExactRoles function to receive confirmation about clusters that contain nodes with only the roles that you specified. See Template functions for more details. You can now define additional health checks and customize status messages for your resource kinds by configuring your ArgoCD resource. See Configuring policy health checks in Red Hat OpenShift GitOps for more information. To add more clarity for compliance messages in your configuration policies, you can now customize compliance messages by using the spec.customMessage fields. See the Kubernetes configuration policy controller . You can now use the .PolicyMetadata hub cluster template variable to access the metadata of a root policy. See the Comparison of hub cluster and managed cluster templates . You can now use the hubTemplateOptions.serviceAccountName field to specify a service account to expand and control access for all hub cluster template lookups. See the Comparison of hub cluster and managed cluster templates . To specify containerArguments in the Gatekeeper operator, provide a list of argument names and values to pass to the container. See the Gatekeeper custom resource sample . The default value for spec.evaluationInterval.compliant and spec.evaluationInterval.noncompliant is watch , so now you can use Kubernetes API watches instead of polling the Kubernetes API server. See Configuration policy YAML table for more information. With the new command-line tools, you can download the PolicyGenerator to generate policies with Kustomize from Kubernetes manifests. You can also use policytools with a template-resolver subcommand to resolve templates locally. See Policy Generator to learn more about the policy generator. See Policy command line interface for more details about policytools . As you directly apply Red Hat Advanced Cluster Management policies and Gatekeeper constraints on your managed clusters, you can now view the deployment of the policies in the Discovered policies tab from the console. See Policy deployment with external tools . See Governance to learn more about the dashboard and the policy framework. 1.1.9. Backup and restore You now have a scenario where you can run a disaster recovery test. By simulating a disaster, you can practice the following actions: restoring hub cluster data on a new hub cluster, verifying that data is recovered, and returning to the initial hub cluster by using the primary hub cluster as the active hub cluster.
See Returning to the initial hub cluster after a restore . You can now use an existing hub cluster as a restore hub cluster by tagging user-created resources on the restore hub cluster with the velero.io/backup-name: backupName label. See Constraints for using an existing hub cluster as a restore hub cluster and Tagging resources . You can now customize the OADP version by setting an annotation on your MultiClusterHub resource. See Installing a custom OADP version . You can now temporarily pause the BackupSchedule resource instead of deleting it. By using the BackupSchedule paused property on the backup hub cluster, you can avoid a backup collision. See Preventing backup collisions . You can now keep the primary hub cluster active during a restore operation. See Keeping the primary hub cluster active during a restore process . With Red Hat Advanced Cluster Management restore resources, you can set more velero.io.restore spec options. See Using other restore samples . To learn about disaster recovery solutions for your hub cluster, see Backup and restore . 1.1.10. multicluster engine operator with Red Hat Advanced Cluster Management integration If you later installed Red Hat Advanced Cluster Management after using stand-alone multicluster engine operator, you get access to all Red Hat Advanced Cluster Management features. You can enable the SiteConfig component from the MultiClusterHub custom resource that is deployed on your cluster. Learn more at MultiClusterHub advanced configuration . Learn more about SiteConfig operator at SiteConfig . 1.1.11. Learn more about this release Get an overview of Red Hat Advanced Cluster Management for Kubernetes from Welcome to Red Hat Advanced Cluster Management for Kubernetes . See more release notes, such as Known Issues and Limitations in the Release notes for Red Hat Advanced Cluster Management . See the Multicluster architecture topic to learn more about major components of the product. See support information and more in the Red Hat Advanced Cluster Management Troubleshooting guide. Access the open source Open Cluster Management repository for interaction, growth, and contributions from the open community. To get involved, see open-cluster-management.io . Visit the GitHub repository for more information. 1.2. Errata updates for Red Hat Advanced Cluster Management By default, Errata updates are automatically applied when released. The details are published here when the release is available. If no release notes are listed, the product does not have an Errata release at this time. Important: For reference, Jira links and Jira numbers might be added to the content and used internally. Links that require access might not be available for the user. See Upgrading by using the operator for more information about upgrades. Important: Cluster lifecycle components and features are within the multicluster engine operator, which is a software operator that enhances cluster fleet management. Release notes for multicluster engine operator-specific features are found in at Release notes for Cluster lifecycle with multicluster engine operator . 1.2.1. Errata 2.12.2 Delivers updates to one or more product container images. Ensures that the observability operator fails to reconcile if it cannot find the Red Hat Advanced Cluster Management 2.12 version of the image from the imagestream . ( ACM-15525 ) Configures the alertmanager high availability (HA) to make Prometheus send requests to all the alertmanagers . 
( ACM-16211 ) Deletes the deployment of the observability-observatorium-operator and recreates the compact statefulset . ( ACM-14867 ) Checks for the existence of image pull secrets on the serviceaccount during the reconcile loop and updates the serviceaccount by replacing it with a serviceaccount without the search-pull-secret secret. ( ACM-15056 ) Reports the status of an existing subscription for the OperatorPolicy . ( ACM-15394 ) Changes the system so that the network time protocol (NTP) server specified in the InfraEnv gets used as a fallback. ( ACM-16163 ) Prevents buttons that check user permissions in namespaces from changing between Enabled and Disabled states. ( ACM-16019 ) Detects if the MultiClusterHub made any changes that need to be applied to the container images. ( ACM-16517 ) 1.2.2. Errata 2.12.1 Delivers updates to one or more product container images. Fixes the multicluster global hub search operator so that it can enable the globalSearchFeatureFlag , even when the multicluster engine operator target namespace is not the default. ( ACM-15075 ) Fixes the OpenShift Data Foundation Operator so that it can install comma-separated values (CSVs) with the OperatorPolicy . ( ACM-14540 ) 1.3. Known issues and limitations for Red Hat Advanced Cluster Management Review the known issues for application management. The following list contains known issues for this release, or known issues that continued from the release. Important: Cluster lifecycle components and features are within the multicluster engine operator, which is a software operator that enhances cluster fleet management. Release notes for multicluster engine operator-specific features are found in at Release notes for Cluster lifecycle with multicluster engine operator . Installation known issues Business continuity known issues Console known issues Cluster management known issues Application known issues Observability known issues Governance known issues Networking known issues Global Hub known issues Important: OpenShift Container Platform release notes are not documented in this product documentation. For your OpenShift Container Platform cluster, see OpenShift Container Platform release notes . For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management . 1.3.1. Installation known issues Review the known issues for installing and upgrading. The following list contains known issues for this release, or known issues that continued from the release. For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues . For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management . 1.3.1.1. Uninstalling and reinstalling earlier versions with an upgrade can fail Uninstalling Red Hat Advanced Cluster Management from OpenShift Container Platform can cause issues if you later want to install earlier versions and then upgrade. For instance, when you uninstall Red Hat Advanced Cluster Management, then install an earlier version of Red Hat Advanced Cluster Management and upgrade that version, the upgrade might fail. The upgrade fails if the custom resources were not removed. Follow the Cleaning up artifacts before reinstalling procedure to prevent this problem. 1.3.1.2. Infrastructure operator error with ARM converged flow When you install the infrastructure-operator , converged flow with ARM does not work. Set ALLOW_CONVERGED_FLOW to false to resolve this issue. 
Run the following command to create a ConfigMap resource: Apply your file by running oc apply -f . See the following file sample with ALLOW_CONVERGED_FLOW set to false : apiVersion: v1 kind: ConfigMap metadata: name: my-assisted-service-config namespace: assisted-installer data: ALLOW_CONVERGED_FLOW: false Annotate the agentserviceconfig with the following command: The agent appears in the inventory when the issue is resolved. 1.3.2. Business continuity known issues Review the known issues for Red Hat Advanced Cluster Management for Kubernetes. The following list contains known issues for this release, or known issues that continued from the release. For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues . For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management . 1.3.2.1. Backup and restore known issues Backup and restore known issues and limitations are listed here, along with workarounds if they are available. 1.3.2.1.1. The open-cluster-management-backup namespace is stuck in the Terminating state When the cluster-backup component is disabled on the MultiClusterHub resource, the open-cluster-management-backup namespace is stuck in the Terminating state if you have a Velero restore resource created by a Red Hat Advanced Cluster Management restore operation. The Terminating state is a result of the Velero restore resources waiting on the restores.velero.io/external-resources-finalizer to complete. To workaround this issue, complete the following steps: Delete all Red Hat Advanced Cluster Management restore resources and wait for the Velero restore to be cleaned up before you disable the cluster backup option on the MultiClusterHub resource. If your open-cluster-management-backup namespace is already stuck in the Terminating state, edit all the Velero restore resources and remove the finalizers. Allow the Velero resources to delete the namespaces and resources. 1.3.2.1.2. Bare metal hub resource no longer backed up by the managed clusters backup If the resources for the bare metal cluster are backed up and restored to a secondary hub cluster by using the Red Hat Advanced Cluster Management back up and restore feature, the managed cluster reinstalls on the nodes, which destroys the existing managed cluster. Note: This only affects bare metal clusters that were deployed by using zero touch provisioning, meaning that they have BareMetalHost resources that manage powering on and off bare metal nodes and attaching virtual media for booting. If a BareMetalHost resource was not used in the deployment of the managed cluster, there is no negative impact. To work around this issue, the BareMetalHost resources on the primary hub cluster are no longer backed up with the managed cluster backup. If you have a different use case and want the managed BareMetalHost resources on the primary hub cluster to be backed up, add the following backup label to the BareMetalHost resources on the primary hub cluster: cluster.open-cluster-management.io/backup . To learn more about using this backup label to backup generic resources, see the topic, Resources that are backed up . 1.3.2.1.3. Velero restore limitations A new hub cluster can have a different configuration than the active hub cluster if the new hub cluster, where the data is restored, has user-created resources. For example, this can include an existing policy that was created on the new hub cluster before the backup data is restored on the new hub cluster. 
Velero skips existing resources if they are not part of the restored backup, so the policy on the new hub cluster remains unchanged, resulting in a different configuration between the new hub cluster and active hub cluster. To address this limitation, the cluster backup and restore operator runs a post restore operation to clean up the resources created by the user or a different restore operation when a restore.cluster.open-cluster-management.io resource is created. For more information, see the Cleaning the hub cluster after restore topic. 1.3.2.1.4. Passive configurations do not display managed clusters Managed clusters are only displayed when the activation data is restored on the passive hub cluster. 1.3.2.1.5. Managed cluster resource not restored When you restore the settings for the local-cluster managed cluster resource and overwrite the local-cluster data on a new hub cluster, the settings are misconfigured. Content from the hub cluster local-cluster is not backed up because the resource contains local-cluster specific information, such as the cluster URL details. You must manually apply any configuration changes that are related to the local-cluster resource on the restored cluster. See Prepare the new hub cluster in the Installing the backup and restore operator topic. 1.3.2.1.6. Restored Hive managed clusters might not be able to connect with the new hub cluster When you restore the backup of the changed or rotated certificate of authority (CA) for the Hive managed cluster, on a new hub cluster, the managed cluster fails to connect to the new hub cluster. The connection fails because the admin kubeconfig secret for this managed cluster, available with the backup, is no longer valid. You must manually update the restored admin kubeconfig secret of the managed cluster on the new hub cluster. 1.3.2.1.7. Imported managed clusters show a Pending Import status Managed clusters that are manually imported on the primary hub cluster show a Pending Import status when the activation data is restored on the passive hub cluster. For more information, see Connecting clusters by using a Managed Service Account . 1.3.2.1.8. The appliedmanifestwork is not removed from managed clusters after restoring the hub cluster When the hub cluster data is restored on the new hub cluster, the appliedmanifestwork is not removed from managed clusters that have a placement rule for an application subscription that is not a fixed cluster set. See the following example of a placement rule for an application subscription that is not a fixed cluster set: spec: clusterReplicas: 1 clusterSelector: matchLabels: environment: dev As a result, the application is orphaned when the managed cluster is detached from the restored hub cluster. To avoid the issue, specify a fixed cluster set in the placement rule. See the following example: spec: clusterSelector: matchLabels: environment: dev You can also delete the remaining appliedmanifestwork manually by running the folowing command: 1.3.2.1.9. The appliedmanifestwork not removed and agentID is missing in the specification When you are using Red Hat Advanced Cluster Management 2.6 as your primary hub cluster, but your restore hub cluster is on version 2.7 or later, the agentID is missing in the specification of appliedmanifestworks because the field is introduced in the 2.7 release. This results in the extra appliedmanifestworks for the primary hub on the managed cluster. 
To avoid the issue, upgrade the primary hub cluster to Red Hat Advanced Cluster Management 2.7, then restore the backup on a new hub cluster. Fix the managed clusters by setting the spec.agentID manually for each appliedmanifestwork . Run the following command to get the agentID : Run the following command to set the spec.agentID for each appliedmanifestwork : 1.3.2.1.10. The managed-serviceaccount add-on status shows Unknown The managed cluster appliedmanifestwork addon-managed-serviceaccount-deploy is removed from the imported managed cluster if you are using the Managed Service Account without enabling it on the multicluster engine for Kubernetes operator resource of the new hub cluster. The managed cluster is still imported to the new hub cluster, but the managed-serviceaccount add-on status shows Unknown . You can recover the managed-serviceaccount add-on after enabling the Managed Service Account in the multicluster engine operator resource. See Enabling automatic import to learn how to enable the Managed Service Account. 1.3.2.2. Volsync known issues 1.3.2.2.1. Manual removal of the VolSync CSV required on managed cluster when removing the add-on When you remove the VolSync ManagedClusterAddOn from the hub cluster, it removes the VolSync operator subscription on the managed cluster but does not remove the cluster service version (CSV). To remove the CSV from the managed clusters, run the following command on each managed cluster from which you are removing VolSync: 1.3.2.2.2. Restoring the connection of a managed cluster with custom CA certificates to its restored hub cluster might fail After you restore the backup of a hub cluster that manages a cluster with custom CA certificates, the connection between the managed cluster and the hub cluster might fail. This is because the CA certificate was not backed up on the restored hub cluster. To restore the connection, copy the custom CA certificate information that is in the namespace of your managed cluster to the <managed_cluster>-admin-kubeconfig secret on the restored hub cluster. Note: If you copy this CA certificate to the hub cluster before creating the backup copy, the backup copy includes the secret information. When you use the backup copy to restore in the future, the connection between the hub cluster and managed cluster automatically completes. 1.3.3. Console known issues Review the known issues for the console. The following list contains known issues for this release, or known issues that continued from the release. For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues . For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management . 1.3.3.1. Cannot upgrade OpenShift Dedicated in console From the console you can request an upgrade for OpenShift Dedicated clusters, but the upgrade fails with the Cannot upgrade non openshift cluster error message. Currently there is no workaround. 1.3.3.2. Search PostgreSQL pod is in CrashLoopBackoff state The search-postgres pod is in CrashLoopBackoff state. If Red Hat Advanced Cluster Management is deployed in a cluster with nodes that have the hugepages parameter enabled and the search-postgres pod gets scheduled in these nodes, then the pod does not start. 
Complete the following steps to increase the memory of the search-postgres pod: Pause the search-operator pod with the following command: oc annotate search search-v2-operator search-pause=true Update the search-postgres deployment with a limit for the hugepages parameter. Run the following command to set the hugepages parameter to 512Mi : oc patch deployment search-postgres --type json -p '[{"op": "add", "path": "/spec/template/spec/containers/0/resources/limits/hugepages-2Mi", "value":"512Mi"}]' Before you verify the memory usage for the pod, make sure your search-postgres pod is in the Running state. Run the following command: oc get pod <your-postgres-pod-name> -o jsonpath="Status: {.status.phase}" Run the following command to verify the memory usage of the search-postgres pod: oc get pod <your-postgres-pod-name> -o jsonpath='{.spec.containers[0].resources.limits.hugepages-2Mi}' The following value appears, 512Mi . 1.3.3.3. Cannot edit namespace bindings for cluster set When you edit namespace bindings for a cluster set with the admin role or bind role, you might encounter an error that resembles the following message: ResourceError: managedclustersetbindings.cluster.open-cluster-management.io "<cluster-set>" is forbidden: User "<user>" cannot create/delete resource "managedclustersetbindings" in API group "cluster.open-cluster-management.io" in the namespace "<namespace>". To resolve the issue, make sure you also have permission to create or delete a ManagedClusterSetBinding resource in the namespace you want to bind. The role bindings only allow you to bind the cluster set to the namespace. 1.3.3.4. Horizontal scrolling does not work after provisioning hosted control plane cluster After provisioning a hosted control plane cluster, you might not be able to scroll horizontally in the cluster overview of the Red Hat Advanced Cluster Management console if the ClusterVersionUpgradeable parameter is too long. You cannot view the hidden data as a result. To work around the issue, zoom out by using your browser zoom controls, increase your Red Hat Advanced Cluster Management console window size, or copy and paste the text to a different location. 1.3.3.5. EditApplicationSet expand feature repeats When you add multiple label expressions or attempt to enter your cluster selector for your ApplicationSet , you might receive the following message repeatedly, "Expand to enter expression". You can enter your cluster selection despite this issue. 1.3.3.6. Unable to log out from Red Hat Advanced Cluster Management When you use an external identity provider to log in to Red Hat Advanced Cluster Management, you might not be able to log out of Red Hat Advanced Cluster Management. This occurs when you use Red Hat Advanced Cluster Management, installed with IBM Cloud and Keycloak as the identity providers. You must log out of the external identity provider before you attempt to log out of Red Hat Advanced Cluster Management. 1.3.3.7. Issues with entering the cluster-ID in the OpenShift Cloud Manager console If you did not access the cluster-ID in the OpenShift Cloud Manager console, you can still get a description of your OpenShift Service on AWS cluster-ID from the terminal. You need the OpenShift Service on AWS command line interface. See Getting started with the OpenShift Service on AWS CLI documentation. To get the cluster-ID , run the following command from the OpenShift Service on AWS command line interface: rosa describe cluster --cluster=<cluster-name> | grep -o '^ID:.* 1.3.4. 
Cluster management known issues and limitations Review the known issues for cluster management with Red Hat Advanced Cluster Management. The following list contains known issues and limitations for this release, or known issues that continued from the release. For Cluster lifecycle with the multicluster engine for Kubernetes operator known issues, see Cluster lifecycle known issues and limitations in the multicluster engine operator documentation. 1.3.4.1. Hub cluster communication limitations The following limitations occur if the hub cluster is not able to reach or communicate with the managed cluster: You cannot create a new managed cluster by using the console. You are still able to import a managed cluster manually by using the command line interface or by using the Run import commands manually option in the console. If you deploy an Application or ApplicationSet by using the console, or if you import a managed cluster into ArgoCD, the hub cluster ArgoCD controller calls the managed cluster API server. You can use AppSub or the ArgoCD pull model to work around the issue. The console page for pod logs does not work, and an error message that resembles the following appears: 1.3.4.2. The local-cluster might not be automatically recreated If the local-cluster is deleted while disableHubSelfManagement is set to false , the local-cluster is recreated by the MulticlusterHub operator. After you detach a local-cluster, the local-cluster might not be automatically recreated. To resolve this issue, modify a resource that is watched by the MulticlusterHub operator. See the following example: To properly detach the local-cluster, set the disableHubSelfManagement to true in the MultiClusterHub . 1.3.4.3. Local-cluster status offline after reimporting with a different name When you accidentally try to reimport the cluster named local-cluster as a cluster with a different name, the status for local-cluster and for the reimported cluster display offline . To recover from this case, complete the following steps: Run the following command on the hub cluster to edit the setting for self-management of the hub cluster temporarily: Add the setting spec.disableSelfManagement=true . Run the following command on the hub cluster to delete and redeploy the local-cluster: Enter the following command to remove the local-cluster management setting: Remove spec.disableSelfManagement=true that you previously added. 1.3.4.4. Hub cluster and managed clusters clock not synced Hub cluster and manage cluster time might become out-of-sync, displaying in the console unknown and eventually available within a few minutes. Ensure that the OpenShift Container Platform hub cluster time is configured correctly. See Customizing nodes . 1.3.5. Application known issues and limitations Review the known issues for application management. The following list contains known issues for this release, or known issues that continued from the release. For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues . For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management . See the following known issues for the Application lifecycle component. 1.3.5.1. Application topology displays invalid expression When you use the Exist or DoesNotExist operators in the Placement resource, the application topology node details display the expressions as #invalidExpr . This display is wrong, and the expression is still valid and works in the Placement resource. 
To workaround this issue, edit the expression inside the Placement resource YAML. 1.3.5.2. Editing subscription applications with PlacementRule does not display the subscription YAML in editor After you create a subscription application that references a PlacementRule resource, the subscription YAML does not display in the YAML editor in the console. Use your terminal to edit your subscription YAML file. 1.3.5.3. Helm Chart with secret dependencies cannot be deployed by the Red Hat Advanced Cluster Management subscription Using Helm Chart, you can define privacy data in a Kubernetes secret and refer to this secret within the value.yaml file of the Helm Chart. The username and password are given by the referred Kubernetes secret resource dbsecret . For example, see the following sample value.yaml file: credentials: secretName: dbsecret usernameSecretKey: username passwordSecretKey: password The Helm Chart with secret dependencies is only supported in the Helm binary CLI. It is not supported in the operator SDK Helm library. The Red Hat Advanced Cluster Management subscription controller applies the operator SDK Helm library to install and upgrade the Helm Chart. Therefore, the Red Hat Advanced Cluster Management subscription cannot deploy the Helm Chart with secret dependencies. 1.3.5.4. Topology does not correctly display for Argo CD pull model ApplicationSet application When you use the Argo CD pull model to deploy ApplicationSet applications and the application resource names are customized, the resource names might appear different for each cluster. When this happens, the topology does not display your application correctly. 1.3.5.5. Local cluster is excluded as a managed cluster for pull model The hub cluster application set deploys to target managed clusters, but the local cluster, which is a managed hub cluster, is excluded as a target managed cluster. As a result, if the Argo CD application is propagated to the local cluster by the Argo CD pull model, the local cluster Argo CD application is not cleaned up, even though the local cluster is removed from the placement decision of the Argo CD ApplicationSet resource. To work around the issue and clean up the local cluster Argo CD application, remove the skip-reconcile annotation from the local cluster Argo CD application. See the following annotation: annotations: argocd.argoproj.io/skip-reconcile: "true" Additionally, if you manually refresh the pull model Argo CD application in the Applications section of the Argo CD console, the refresh is not processed and the REFRESH button in the Argo CD console is disabled. To work around the issue, remove the refresh annotation from the Argo CD application. See the following annotation: annotations: argocd.argoproj.io/refresh: normal 1.3.5.6. Argo CD controller and the propagation controller might reconcile simultaneously Both the Argo CD controller and the propagation controller might reconcile on the same application resource and cause the duplicate instances of application deployment on the managed clusters, but from the different deployment models. For deploying applications by using the pull model, the Argo CD controllers ignore these application resources when the Argo CD argocd.argoproj.io/skip-reconcile annotation is added to the template section of the ApplicationSet . The argocd.argoproj.io/skip-reconcile annotation is only available in the GitOps operator version 1.9.0, or later. 
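As an illustration of where that annotation is placed, the following is a minimal sketch of the template section of an ApplicationSet; the ApplicationSet name, repository, and destination values are placeholders and this is not a complete pull model configuration:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-appset                      # hypothetical name
  namespace: openshift-gitops
spec:
  # generators omitted for brevity
  template:
    metadata:
      name: 'my-app-{{name}}'
      annotations:
        argocd.argoproj.io/skip-reconcile: "true"   # hub Argo CD controller skips reconciling the generated applications
    spec:
      project: default
      source:
        repoURL: https://github.com/example/repo.git   # placeholder repository
        path: manifests
        targetRevision: main
      destination:
        server: '{{server}}'           # placeholder destination
        namespace: my-app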
To prevent conflicts, wait until the hub cluster and all the managed clusters are upgraded to GitOps operator version 1.9.0 before implementing the pull model. 1.3.5.7. Resource fails to deploy All the resources listed in the MulticlusterApplicationSetReport are actually deployed on the managed clusters. If a resource fails to deploy, the resource is not included in the resource list, but the cause is listed in the error message. 1.3.5.8. Resource allocation might take several minutes For large environments with over 1000 managed clusters and Argo CD application sets that are deployed to hundreds of managed clusters, Argo CD application creation on the hub cluster might take several minutes. You can set the requeueAfterSeconds to zero in the clusterDecisionResource generator of the application set, as it is displayed in the following example file:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cm-allclusters-app-set
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource:
        configMapRef: ocm-placement-generator
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: app-placement
        requeueAfterSeconds: 0

1.3.5.9. Application ObjectBucket channel type cannot use allow and deny lists You cannot specify allow and deny lists with ObjectBucket channel type in the subscription-admin role. In other channel types, the allow and deny lists in the subscription indicate which Kubernetes resources can be deployed, and which Kubernetes resources should not be deployed. 1.3.5.9.1. Argo Application cannot be deployed on 3.x OpenShift Container Platform managed clusters Argo ApplicationSet from the console cannot be deployed on 3.x OpenShift Container Platform managed clusters because the Infrastructure.config.openshift.io API is not available on 3.x. 1.3.5.10. Changes to the multicluster_operators_subscription image do not take effect automatically The application-manager add-on that is running on the managed clusters is now handled by the subscription operator, whereas it was previously handled by the klusterlet operator. The subscription operator is not managed by the multicluster-hub , so changes to the multicluster_operators_subscription image in the multicluster-hub image manifest ConfigMap do not take effect automatically. If the image that is used by the subscription operator is overridden by changing the multicluster_operators_subscription image in the multicluster-hub image manifest ConfigMap, the application-manager add-on on the managed clusters does not use the new image until the subscription operator pod is restarted. You need to restart the pod. 1.3.5.11. Policy resource not deployed unless by subscription administrator The policy.open-cluster-management.io/v1 resources are no longer deployed by an application subscription by default for Red Hat Advanced Cluster Management version 2.4. A subscription administrator needs to deploy the application subscription to change this default behavior. See Creating an allow and deny list as subscription administrator for information. policy.open-cluster-management.io/v1 resources that were deployed by existing application subscriptions in previous Red Hat Advanced Cluster Management versions remain, but are no longer reconciled with the source repository unless the application subscriptions are deployed by a subscription administrator. 1.3.5.12. Application Ansible hook stand-alone mode Ansible hook stand-alone mode is not supported.
To deploy Ansible hook on the hub cluster with a subscription, you might use the following subscription YAML: apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: sub-rhacm-gitops-demo namespace: hello-openshift annotations: apps.open-cluster-management.io/github-path: myapp apps.open-cluster-management.io/github-branch: master spec: hooksecretref: name: toweraccess channel: rhacm-gitops-demo/ch-rhacm-gitops-demo placement: local: true However, this configuration might never create the Ansible instance, since the spec.placement.local:true has the subscription running on standalone mode. You need to create the subscription in hub mode. Create a placement rule that deploys to local-cluster . See the following sample where local-cluster: "true" refers to your hub cluster: apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: <towhichcluster> namespace: hello-openshift spec: clusterSelector: matchLabels: local-cluster: "true" Reference that placement rule in your subscription. See the following sample: apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: sub-rhacm-gitops-demo namespace: hello-openshift annotations: apps.open-cluster-management.io/github-path: myapp apps.open-cluster-management.io/github-branch: master spec: hooksecretref: name: toweraccess channel: rhacm-gitops-demo/ch-rhacm-gitops-demo placement: placementRef: name: <towhichcluster> kind: PlacementRule After applying both, you should see the Ansible instance created in your hub cluster. 1.3.5.13. Application not deployed after an updated placement rule If applications are not deploying after an update to a placement rule, verify that the application-manager pod is running. The application-manager is the subscription container that needs to run on managed clusters. You can run oc get pods -n open-cluster-management-agent-addon |grep application-manager to verify. You can also search for kind:pod cluster:yourcluster in the console and see if the application-manager is running. If you cannot verify, attempt to import the cluster again and verify again. 1.3.5.14. Subscription operator does not create an SCC Learn about Red Hat OpenShift Container Platform SCC at Managing security context constraints , which is an additional configuration required on the managed cluster. Different deployments have different security context and different service accounts. The subscription operator cannot create an SCC CR automatically.. Administrators control permissions for pods. A Security Context Constraints (SCC) CR is required to enable appropriate permissions for the relative service accounts to create pods in the non-default namespace. To manually create an SCC CR in your namespace, complete the following steps: Find the service account that is defined in the deployments. For example, see the following nginx deployments: Create an SCC CR in your namespace to assign the required permissions to the service account or accounts. See the following example, where kind: SecurityContextConstraints is added: apiVersion: security.openshift.io/v1 defaultAddCapabilities: kind: SecurityContextConstraints metadata: name: ingress-nginx namespace: ns-sub-1 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny users: - system:serviceaccount:my-operator:nginx-ingress-52edb - system:serviceaccount:my-operator:nginx-ingress-52edb-backend 1.3.5.15. 
Application channels require unique namespaces Creating more than one channel in the same namespace can cause errors with the hub cluster. For instance, namespace charts-v1 is used by the installer as a Helm type channel, so do not create any additional channels in charts-v1 . Ensure that you create your channel in a unique namespace. All channels need an individual namespace, except GitHub channels, which can share a namespace with another GitHub channel. 1.3.5.16. Ansible Automation Platform jobs fail Ansible jobs fail to run when you select an incompatible option. Ansible Automation Platform only works when cluster-scoped channel options are chosen. This affects all components that need to perform Ansible jobs. 1.3.5.17. Ansible Automation Platform operator access Ansible Automation Platform outside of a proxy The Red Hat Ansible Automation Platform operator cannot access Ansible Automation Platform outside of a proxy-enabled OpenShift Container Platform cluster. To resolve the issue, you can install the Ansible Automation Platform within the proxy. See the installation steps that are provided by Ansible Automation Platform. 1.3.5.18. Application name requirements An application name cannot exceed 37 characters. The application deployment displays the following error if the characters exceed this amount. status: phase: PropagationFailed reason: 'Deployable.apps.open-cluster-management.io "_long_lengthy_name_" is invalid: metadata.labels: Invalid value: "_long_lengthy_name_": must be no more than 63 characters\n' 1.3.5.19. Application console table limitations See the following limitations of the various Application tables in the console: From the Applications table on the Overview page and the Subscriptions table on the Advanced configuration page, the Clusters column displays a count of clusters where application resources are deployed. Since applications are defined by resources on the local cluster, the local cluster is included in the search results, whether actual application resources are deployed on the local cluster or not. From the Advanced configuration table for Subscriptions , the Applications column displays the total number of applications that use that subscription, but if the subscription deploys child applications, those are included in the search result, as well. From the Advanced configuration table for Channels , the Subscriptions column displays the total number of subscriptions on the local cluster that use that channel, but this does not include subscriptions that are deployed by other subscriptions, which are included in the search result. 1.3.5.20. No Application console topology filtering The console and Topology views for applications changed for 2.12. There is no filtering capability from the console Topology page. 1.3.5.21. Allow and deny list does not work in Object storage applications The allow and deny list feature does not work in Object storage application subscriptions. 1.3.6. Observability known issues Review the known issues for Red Hat Advanced Cluster Management for Kubernetes. The following list contains known issues for this release, or known issues that continued from the previous release. For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues . For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management . 1.3.6.1.
Grafana dashboard missing A Grafana dashboard might fail to load after you run the Grafana instance. Complete the following steps: To verify whether a dashboard failed to load, check the logs by running the following command: oc logs observability-grafana-68f8489659-m79rv -c grafana-dashboard-loader -n open-cluster-management-observability ... E1017 12:55:24.532493 1 dashboard_controller.go:147] dashboard: sample-dashboard could not be created after retrying 40 times To fix the dashboard failure, redeploy Grafana by scaling the number of replicas to 0 . The multicluster-observability-operator pod automatically scales the deployment to the desired number of replicas that is defined in the MultiClusterObservability resource. Run the following command: oc scale deployment observability-grafana -n open-cluster-management-observability --replicas=0 To verify that the dashboard loads correctly after redployment, run the following command to check the logs of all Grafana pods and ensure no error message appears: oc logs observability-grafana-68f8489659-h6jd9 -c grafana-dashboard-loader -n open-cluster-management-observability | grep "could not be created" 1.3.6.2. Retention change causes data loss The default retention for all resolution levels, such as retentionResolutionRaw , retentionResolution5m , or retentionResolution1h , is 365 days ( 365d ). This 365d default retention means that the default retention for a 1 hour resolution has decreased from indefinite, 0d to 365d . This retention change might cause you to lose data. If you did not set an explicit value for the resolution retention in your MultiClusterObservability spec.advanced.retentionConfig parameter, you might lose data. For more information, see Adding advanced configuration for retention . 1.3.6.3. Observatorium API gateway pods in a restored hub cluster might have stale tenant data The Observatorium API gateway pods in a restored hub cluster might contain stale tenant data after a backup and restore procedure because of a Kubernetes limitation. See Mounted ConfigMaps are updated automatically for more about the limitation. As a result, the Observatorium API and Thanos gateway rejects metrics from collectors, and the Red Hat Advanced Cluster Management Grafana dashboards do not display data. See the following errors from the Observatorium API gateway pod logs: Thanos receives pods logs with the following errors: See the following procedure to resolve this issue: Scale down the observability-observatorium-api deployment instances from N to 0 . Scale up the observability-observatorium-api deployment instances from 0 to N . Note: N = 2 by default, but might be greater than 2 in some custom configuration environments. This restarts all Observatorium API gateway pods with the correct tenant information, and the data from collectors start displaying in Grafana in between 5-10 minutes. 1.3.6.4. Permission to add PrometheusRules and ServiceMonitors in openshift-monitoring namespace denied Starting with Red Hat Advanced Cluster Management 2.9, you must use a label in your defined Red Hat Advanced Cluster Management hub cluster namespace. The label, openshift.io/cluster-monitoring: "true" causes the Cluster Monitoring Operator to scrape the namespace for metrics. When Red Hat Advanced Cluster Management 2.9 is deployed or an installation is upgraded to 2.9, the Red Hat Advanced Cluster Management Observability ServiceMonitors and PrometheusRule resources are no longer present in the openshift-monitoring namespace. 1.3.6.5. 
Lack of support for proxy settings The Prometheus AdditionalAlertManagerConfig resource of the observability add-on does not support proxy settings. You must disable the observability alert forwarding feature. Complete the following steps to disable alert forwarding: Go to the MultiClusterObservability resource. Update the mco-disabling-alerting parameter value to true The HTTPS proxy with a self-signed CA certificate is not supported. 1.3.6.6. Duplicate local-clusters on Service-level Overview dashboard When various hub clusters deploy Red Hat Advanced Cluster Management observability using the same S3 storage, duplicate local-clusters can be detected and displayed within the Kubernetes/Service-Level Overview/API Server dashboard. The duplicate clusters affect the results within the following panels: Top Clusters , Number of clusters that has exceeded the SLO , and Number of clusters that are meeting the SLO . The local-clusters are unique clusters associated with the shared S3 storage. To prevent multiple local-clusters from displaying within the dashboard, it is recommended for each unique hub cluster to deploy observability with a S3 bucket specifically for the hub cluster. 1.3.6.7. Observability endpoint operator fails to pull image The observability endpoint operator fails if you create a pull-secret to deploy to the MultiClusterObservability CustomResource (CR) and there is no pull-secret in the open-cluster-management-observability namespace. When you import a new cluster, or import a Hive cluster that is created with Red Hat Advanced Cluster Management, you need to manually create a pull-image secret on the managed cluster. For more information, see Enabling observability . 1.3.6.8. There is no data from ROKS clusters Red Hat Advanced Cluster Management observability does not display data from a ROKS cluster on some panels within built-in dashboards. This is because ROKS does not expose any API server metrics from servers they manage. The following Grafana dashboards contain panels that do not support ROKS clusters: Kubernetes/API server , Kubernetes/Compute Resources/Workload , Kubernetes/Compute Resources/Namespace(Workload) 1.3.6.9. There is no etcd data from ROKS clusters For ROKS clusters, Red Hat Advanced Cluster Management observability does not display data in the etcd panel of the dashboard. 1.3.6.10. Metrics are unavailable in the Grafana console Annotation query failed in the Grafana console: When you search for a specific annotation in the Grafana console, you might receive the following error message due to an expired token: "Annotation Query Failed" Refresh your browser and verify you are logged into your hub cluster. Error in rbac-query-proxy pod: Due to unauthorized access to the managedcluster resource, you might receive the following error when you query a cluster or project: no project or cluster found Check the role permissions and update appropriately. See Role-based access control for more information. 1.3.6.11. Prometheus data loss on managed clusters By default, Prometheus on OpenShift uses ephemeral storage. Prometheus loses all metrics data whenever it is restarted. When observability is enabled or disabled on OpenShift Container Platform managed clusters that are managed by Red Hat Advanced Cluster Management, the observability endpoint operator updates the cluster-monitoring-config ConfigMap by adding additional alertmanager configuration that restarts the local Prometheus automatically. 1.3.6.12. 
Error ingesting out-of-order samples Observability receive pods report the following error message: The error message means that the time series data sent by a managed cluster, during a metrics collection interval is older than the time series data it sent in the collection interval. When this problem happens, data is discarded by the Thanos receivers and this might create a gap in the data shown in Grafana dashboards. If the error is seen frequently, it is recommended to increase the metrics collection interval to a higher value. For example, you can increase the interval to 60 seconds. The problem is only noticed when the time series interval is set to a lower value, such as 30 seconds. Note, this problem is not seen when the metrics collection interval is set to the default value of 300 seconds. 1.3.6.13. Grafana deployment fails after upgrade If you have a grafana-dev instance deployed in earlier versions before 2.6, and you upgrade the environment to 2.6, the grafana-dev does not work. You must delete the existing grafana-dev instance by running the following command: Recreate the instance with the following command: 1.3.6.14. klusterlet-addon-search pod fails The klusterlet-addon-search pod fails because the memory limit is reached. You must update the memory request and limit by customizing the klusterlet-addon-search deployment on your managed cluster. Edit the ManagedclusterAddon custom resource named search-collector , on your hub cluster. Add the following annotations to the search-collector and update the memory, addon.open-cluster-management.io/search_memory_request=512Mi and addon.open-cluster-management.io/search_memory_limit=1024Mi . For example, if you have a managed cluster named foobar , run the following command to change the memory request to 512Mi and the memory limit to 1024Mi : 1.3.6.15. Enabling disableHubSelfManagement causes empty list in Grafana dashboard The Grafana dashboard shows an empty label list if the disableHubSelfManagement parameter is set to true in the mulitclusterengine custom resource. You must set the parameter to false or remove the parameter to see the label list. See disableHubSelfManagement for more details. 1.3.6.15.1. Endpoint URL cannot have fully qualified domain names (FQDN) When you use the FQDN or protocol for the endpoint parameter, your observability pods are not enabled. The following error message is displayed: Endpoint url cannot have fully qualified paths Enter the URL without the protocol. Your endpoint value must resemble the following URL for your secrets: endpoint: example.com:443 1.3.6.15.2. Grafana downsampled data mismatch When you attempt to query historical data and there is a discrepancy between the calculated step value and downsampled data, the result is empty. For example, if the calculated step value is 5m and the downsampled data is in a one-hour interval, data does not appear from Grafana. This discrepancy occurs because a URL query parameter must be passed through the Thanos Query front-end data source. Afterwards, the URL query can perform additional queries for other downsampling levels when data is missing. You must manually update the Thanos Query front-end data source configuration. Complete the following steps: Go to the Query front-end data source. To update your query parameters, click the Misc section. From the Custom query parameters field, select max_source_resolution=auto . To verify that the data is displayed, refresh your Grafana page. Your query data appears from the Grafana dashboard. 1.3.6.16. 
Metrics collector does not detect proxy configuration A proxy configuration in a managed cluster that you configure by using the addonDeploymentConfig is not detected by the metrics collector. As a workaround, you can enable the proxy by removing the managed cluster ManifestWork . Removing the ManifestWork forces the changes in the addonDeploymentConfig to be applied. 1.3.6.17. Limitations when using custom managed cluster Observatorium API or Alertmanager URLs Custom Observatorium API and Alertmanager URLs only support intermediate components with TLS passthrough. If both custom URLs are pointing to the same intermediate component, you must use separate sub-domains because OpenShift Container Platform routers do not support two separate route objects with the same host. 1.3.6.17.1. Search does not display node information from the managed cluster Search maps RBAC for resources in the hub cluster. Depending on user RBAC settings, users might not see node data from the managed cluster. Results from search might be different from what is displayed on the Nodes page for a cluster. 1.3.7. Governance known issues Review the known issues for Governance. The following list contains known issues for this release, or known issues that continued from the previous release. For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues . For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management . 1.3.7.1. Configuration policy listed compliant when namespace is stuck in Terminating state When you have a configuration policy that is configured with mustnothave for the complianceType parameter and enforce for the remediationAction parameter, the policy is listed as compliant when a deletion request is made to the Kubernetes API. Therefore, the Kubernetes object can be stuck in a Terminating state while the policy is listed as compliant. 1.3.7.2. Operators deployed with policies do not support ARM While installation into an ARM environment is supported, operators that are deployed with policies might not support ARM environments. The following policies that install operators do not support ARM environments: Red Hat Advanced Cluster Management policy for the Quay Container Security Operator Red Hat Advanced Cluster Management policy for the Compliance Operator 1.3.7.3. ConfigurationPolicy custom resource definition is stuck in terminating When you remove the config-policy-controller add-on from a managed cluster by disabling the policy controller in the KlusterletAddonConfig or by detaching the cluster, the ConfigurationPolicy custom resource definition might get stuck in a terminating state. If the ConfigurationPolicy custom resource definition is stuck in a terminating state, new policies might not be added to the cluster if the add-on is reinstalled later. You can also receive the following error: Use the following command to check if the custom resource definition is stuck: If a deletion timestamp is on the resource, the custom resource definition is stuck. To resolve the issue, remove all finalizers from configuration policies that remain on the cluster. Use the following command on the managed cluster and replace <cluster-namespace> with the managed cluster namespace: The configuration policy resources are automatically removed from the cluster and the custom resource definition exits its terminating state.
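A rough sketch of the check and cleanup described above follows. The commands assume the default ConfigurationPolicy custom resource definition name and are illustrative rather than the exact documented commands:

# Check whether the custom resource definition carries a deletion timestamp
oc get crd configurationpolicies.policy.open-cluster-management.io -o jsonpath='{.metadata.deletionTimestamp}'

# Remove the finalizers from any ConfigurationPolicy resources that remain
oc get configurationpolicy -n <cluster-namespace> -o name | \
  xargs -I {} oc -n <cluster-namespace> patch {} --type=merge -p '{"metadata":{"finalizers":[]}}'

Replace <cluster-namespace> with the managed cluster namespace, as noted above.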
If the add-on has already been reinstalled, the custom resource definition is recreated automatically without a deletion timestamp. 1.3.7.4. Policy status shows repeated updates when enforced If a policy is set to remediationAction: enforce and is repeatedly updated, the Red Hat Advanced Cluster Management console shows repeated violations with successful updates. Repeated updates produce multiple policy events, which can cause the governance-policy-framework-addon pod to run out of memory and crash. See the following two possible causes and solutions for the error: Another controller or process is also updating the object with different values. To resolve the issue, disable the policy and compare the differences between objectDefinition in the policy and the object on the managed cluster. If the values are different, another controller or process might be updating them. Check the metadata of the object to help identify why the values are different. The objectDefinition in the ConfigurationPolicy does not match because of Kubernetes processing the object when the policy is applied. To resolve the issue, disable the policy and compare the differences between objectDefinition in the policy and the object on the managed cluster. If the keys are different or missing, Kubernetes might have processed the keys before applying them to the object, such as removing keys containing default or empty values. 1.3.7.5. Duplicate policy template names create inconstistent results When you create a policy with identical policy template names, you receive inconsistent results that are not detected, but you might not know the cause. For example, defining a policy with multiple configuration policies named create-pod causes inconsistent results. Best practice: Avoid using duplicate names for policy templates. 1.3.7.6. Database and policy compliance history API outage There is built-in resilience for database and policy compliance history API outages, however, any compliance events that cannot be recorded by a managed cluster are queued in memory until they are successfully recorded. This means that if there is an outage and the governance-policy-framework pod on the managed cluster restarts, all queued compliance events are lost. If you create or update a new policy during a database outage, any compliance events sent for this new policy cannot be recorded since the mapping of policies to database IDs cannot be updated. When the database is back online, the mapping is automatically updated and future compliance events from those policies are recorded. 1.3.7.7. PostgreSQL data loss If there is data loss to the PostgreSQL server such as restoring to a backup without the latest data, you must restart the governance policy propagator on the Red Hat Advanced Cluster Management hub cluster so that it can update the mapping of policies to database IDs. Until you restart the governance policy propagator, new compliance events associated with policies that once existed in the database are no longer recorded. To restart the governance policy propagator, run the following command on the Red Hat Advanced Cluster Management hub cluster: oc -n open-cluster-management rollout restart deployment/grc-policy-propagator 1.3.7.8. 
Kyverno policies no longer report a status for the latest version Kyverno policies generated by the Policy Generator report the following message in your Red Hat Advanced Cluster Management cluster: The cause is that the PolicyReport API version is incorrect in the generator and does not match what Kyverno has deployed. 1.3.8. Known issues for networking Review the known issues for Submariner. The following list contains known issues for this release, or known issues that continued from the release. For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues . For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management . 1.3.8.1. Submariner known issues See the following known issues and limitations that might occur while using networking features. 1.3.8.1.1. Without ClusterManagementAddon submariner add-on fails For versions 2.8 and earlier, when you install Red Hat Advanced Cluster Management, you also deploy the submariner-addon component with the Operator Lifecycle Manager. If you did not create a MultiClusterHub custom resource, the submariner-addon pod sends an error and prevents the operator from installing. The following notification occurs because the ClusterManagementAddon custom resource definition is missing: The ClusterManagementAddon resource is created by the cluster-manager deployment, however, this deployment becomes available when the MultiClusterEngine components are installed on the cluster. If there is not a MultiClusterEngine resource that is already available on the cluster when the MultiClusterHub custom resource is created, the MultiClusterHub operator deploys the MultiClusterEngine instance and the operator that is required, which resolves the error. 1.3.8.1.2. Submariner add-on resources not cleaned up properly when managed clusters are imported If the submariner-addon component is set to false within MultiClusterHub (MCH) operator, then the submariner-addon finalizers are not cleaned up properly for the managed cluster resources. Since the finalizers are not cleaned up properly, this prevents the submariner-addon component from being disabled within the hub cluster. 1.3.8.1.3. Submariner install plan limitation The Submariner install plan does not follow the overall install plan settings. Therefore, the operator management screen cannot control the Submariner install plan. By default, Submariner install plans are applied automatically, and the Submariner addon is always updated to the latest available version corresponding to the installed Red Hat Advanced Cluster Management version. To change this behavior, you must use a customized Submariner subscription. 1.3.8.1.4. Limited headless services support Service discovery is not supported for headless services without selectors when using Globalnet. 1.3.8.1.5. Deployments that use VXLAN when NAT is enabled are not supported Only non-NAT deployments support Submariner deployments with the VXLAN cable driver. 1.3.8.1.6. Self-signed certificates might prevent connection to broker Self-signed certificates on the broker might prevent joined clusters from connecting to the broker. The connection fails with certificate validation errors. You can disable broker certificate validation by setting InsecureBrokerConnection to true in the relevant SubmarinerConfig object. 
See the following example: apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: insecureBrokerConnection: true 1.3.8.1.7. Submariner only supports OpenShift SDN or OVN Kubernetes Submariner only supports Red Hat OpenShift Container Platform clusters that use the OpenShift SDN or the OVN-Kubernetes Container Network Interface (CNI) network provider. 1.3.8.1.8. Command limitation on Microsoft Azure clusters The subctl diagnose firewall inter-cluster command does not work on Microsoft Azure clusters. 1.3.8.1.9. Automatic upgrade not working with custom CatalogSource or Subscription Submariner is automatically upgraded when Red Hat Advanced Cluster Management for Kubernetes is upgraded. The automatic upgrade might fail if you are using a custom CatalogSource or Subscription . To make sure automatic upgrades work when installing Submariner on managed clusters, you must set the spec.subscriptionConfig.channel field to stable-0.15 in the SubmarinerConfig custom resource for each managed cluster. 1.3.8.1.10. Submariner conflicts with IPsec-enabled OVN-Kubernetes deployments IPsec tunnels that are created by IPsec-enabled OVN-Kubernetes deployments might conflict with IPsec tunnels that are created by Submariner. Do not use OVN-Kubernetes in IPsec mode with Submariner. 1.3.8.1.11. Uninstall Submariner before removing ManagedCluster from a ManageClusterSet If you remove a cluster from a ClusterSet , or move a cluster to a different ClusterSet , the Submariner installation is no longer valid. You must uninstall Submariner before moving or removing a ManagedCluster from a ManageClusterSet . If you don't uninstall Submariner, you cannot uninstall or reinstall Submariner anymore and Submariner stops working on your ManagedCluster . 1.3.9. Multicluster global hub Operator known issues Review the known issues for the multicluster global hub Operator. The following list contains known issues for this release, or known issues that continued from the release. For your OpenShift Container Platform cluster, see OpenShift Container Platform known issues . 1.3.9.1. The detached managed hub cluster deletes and recreates the namespace and resources If you import a managed hub cluster in the hosted mode and detach this managed hub cluster, then it deletes and recreates the open-cluster-management-agent-addon namespace. The detached managed hub cluster also deletes and recreates all the related addon resources within this namespace. There is currently no workaround for this issue. 1.3.9.2. Kafka operator keeps restarting In the Federal Information Processing Standard (FIPS) environment, the Kafka operator keeps restarting because of the out-of-memory (OOM) state. To fix this issue, set the resource limit to at least 512M . For detailed steps on how to set this limit, see amq stream doc . 1.3.9.3. Backup and restore known issues If your original multicluster global hub cluster crashes, the multicluster global hub loses its generated events and cron jobs. Even if you restore the new multicluster global hub cluster, the events and cron jobs are not restored. To workaround this issue, you can manually run the cron job, see Running the summarization process manually . 1.3.9.4. 
Managed cluster displays but is not counted A managed cluster that is not created successfully, meaning clusterclaim id.k8s.io does not exist in the managed cluster, is not counted in the policy compliance dashboards, but shows in the policy console. 1.3.9.5. The multicluster global hub is installed on OpenShift Container Platform 4.13 hyperlinks might redirect home If the multicluster global hub Operator is installed on OpenShift Container Platform 4.13, all hyperlinks that link to the managed clusters list and detail pages in dashboards might redirect to the Red Hat Advanced Cluster Management home page. You need to manually go to your target page. 1.3.9.6. The standard group filter cannot pass to the new page In the Global Hub Policy Group Compliancy Overview hub dashboards, you can check one data point by clicking View Offending Policies for standard group , but after you click this link to go to the offending page, the standard group filter cannot pass to the new page. This is also an issue for the Cluster Group Compliancy Overview . 1.4. Deprecations and removals for Red Hat Advanced Cluster Management Learn when parts of the product are deprecated or removed from Red Hat Advanced Cluster Management for Kubernetes. Consider the alternative actions in the Recommended action and details, which display in the tables for the current release and for two prior releases. Deprecated: Red Hat Advanced Cluster Management 2.8 and earlier versions are no longer supported. The documentation might remain available, but without any Errata or other updates. Best practice: Upgrade to the most recent version. Important: Cluster lifecycle components and features are within the multicluster engine operator, which is a software operator that enhances cluster fleet management. Release notes for multicluster engine operator-specific features are found in at Release notes for Cluster lifecycle with multicluster engine operator . 1.4.1. API deprecations and removals Red Hat Advanced Cluster Management follows the Kubernetes deprecation guidelines for APIs. See the Kubernetes Deprecation Policy for more details about that policy. Red Hat Advanced Cluster Management APIs are only deprecated or removed outside of the following timelines: All V1 APIs are generally available and supported for 12 months or three releases, whichever is greater. V1 APIs are not removed, but can be deprecated outside of that time limit. All beta APIs are generally available for nine months or three releases, whichever is greater. Beta APIs are not removed outside of that time limit. All alpha APIs are not required to be supported, but might be listed as deprecated or removed if it benefits users. 1.4.2. Red Hat Advanced Cluster Management deprecations A deprecated component, feature, or service is supported, but no longer recommended for use and might become obsolete in future releases. Consider the alternative actions in the Recommended action and details that are provided in the following table: Product or category Affected item Version Recommended action More details and links Overview page Red Hat Advanced Cluster Management for Kubernetes search 2.12 Enable the Fleet view switch to view the new default Overview page. The layout of the Red Hat Advanced Cluster Management Overview page is deprecated. Policy compliance history API Governance 2.12 Use the existing policy metrics to see the compliance status changes. 
You can also view the config-policy-controller and cert-policy-controller pod logs to get a detailed compliance history for each managed cluster. For more information, see Policy controller advanced configuration . Installer ingress.sslCiphers field in operator.open-cluster-management.io_multiclusterhubs_crd.yaml 2.9 None See Advanced Configuration for configuring install. If you uppgrade your Red Hat Advanced Cluster Management for Kubernetes version and originally had a MultiClusterHub custom resource with the spec.ingress.sslCiphers field defined, the field is still recognized, but is deprecated and has no effect. Applications and Governance PlacementRule 2.8 Use Placement anywhere that you might use PlacementRule . While PlacementRule is still available, it is not supported and the console displays Placement by default. 1.4.3. Removals A removed item is typically function that was deprecated in releases and is no longer available in the product. You must use alternatives for the removed function. Consider the alternative actions in the Recommended action and details that are provided in the following table: Product or category Affected item Version Recommended action More details and links Governance IAM policy controller 2.11 None 1.5. Red Hat Advanced Cluster Management platform considerations for GDPR readiness 1.5.1. Notice This document is intended to help you in your preparations for General Data Protection Regulation (GDPR) readiness. It provides information about features of the Red Hat Advanced Cluster Management for Kubernetes platform that you can configure, and aspects of the product's use, that you should consider to help your organization with GDPR readiness. This information is not an exhaustive list, due to the many ways that clients can choose and configure features, and the large variety of ways that the product can be used in itself and with third-party clusters and systems. *Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation. Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients' business and any actions the clients may need to take to comply with such laws and regulations.* The products, services, and other capabilities described herein are not suitable for all client situations and may have restricted availability. Red Hat does not provide legal, accounting, or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation. 1.5.2. GDPR Product Configuration for GDPR Data Life Cycle Data Collection Data Storage Data Access Data Processing Data Deletion Capability for Restricting Use of Personal Data Appendix 1.5.3. GDPR General Data Protection Regulation (GDPR) has been adopted by the European Union (EU) and applies from May 25, 2018. 1.5.3.1. Why is GDPR important? GDPR establishes a stronger data protection regulatory framework for processing personal data of individuals. GDPR brings: New and enhanced rights for individuals Widened definition of personal data New obligations for processors Potential for significant financial penalties for non-compliance Compulsory data breach notification 1.5.3.2. Read more about GDPR EU GDPR Information Portal Red Hat GDPR website 1.5.4. 
Product Configuration for GDPR The following sections describe aspects of data management within the Red Hat Advanced Cluster Management for Kubernetes platform and provide information on capabilities to help clients with GDPR requirements. 1.5.5. Data Life Cycle Red Hat Advanced Cluster Management for Kubernetes is an application platform for developing and managing on-premises, containerized applications. It is an integrated environment for managing containers that includes the container orchestrator Kubernetes, cluster lifecycle, application lifecycle, and security frameworks (governance, risk, and compliance). As such, the Red Hat Advanced Cluster Management for Kubernetes platform deals primarily with technical data that is related to the configuration and management of the platform, some of which might be subject to GDPR. The Red Hat Advanced Cluster Management for Kubernetes platform also deals with information about users who manage the platform. This data will be described throughout this document for the awareness of clients responsible for meeting GDPR requirements. This data is persisted on the platform on local or remote file systems as configuration files or in databases. Applications that are developed to run on the Red Hat Advanced Cluster Management for Kubernetes platform might deal with other forms of personal data subject to GDPR. The mechanisms that are used to protect and manage platform data are also available to applications that run on the platform. Additional mechanisms might be required to manage and protect personal data that is collected by applications run on the Red Hat Advanced Cluster Management for Kubernetes platform. To best understand the Red Hat Advanced Cluster Management for Kubernetes platform and its data flows, you must understand how Kubernetes, Docker, and the Operator work. These open source components are fundamental to the Red Hat Advanced Cluster Management for Kubernetes platform. You use Kubernetes deployments to place instances of applications, which are built into Operators that reference Docker images. The Operator contain the details about your application, and the Docker images contain all the software packages that your applications need to run. 1.5.5.1. What types of data flow through Red Hat Advanced Cluster Management for Kubernetes platform As a platform, Red Hat Advanced Cluster Management for Kubernetes deals with several categories of technical data that could be considered as personal data, such as an administrator user ID and password, service user IDs and passwords, IP addresses, and Kubernetes node names. The Red Hat Advanced Cluster Management for Kubernetes platform also deals with information about users who manage the platform. Applications that run on the platform might introduce other categories of personal data unknown to the platform. Information on how this technical data is collected/created, stored, accessed, secured, logged, and deleted is described in later sections of this document. 1.5.5.2. Personal data used for online contact Customers can submit online comments, feedback, and requests for information about in a variety of ways, primarily: The public Slack community if there is a Slack channel The public comments or tickets on the product documentation The public conversations in a technical community Typically, only the client name and email address are used, to enable personal replies for the subject of the contact, and the use of personal data conforms to the Red Hat Online Privacy Statement . 1.5.6. 
Data Collection The Red Hat Advanced Cluster Management for Kubernetes platform does not collect sensitive personal data. It does create and manage technical data, such as an administrator user ID and password, service user IDs and passwords, IP addresses, and Kubernetes node names, which might be considered personal data. The Red Hat Advanced Cluster Management for Kubernetes platform also deals with information about users who manage the platform. All such information is only accessible by the system administrator through a management console with role-based access control or by the system administrator though login to a Red Hat Advanced Cluster Management for Kubernetes platform node. Applications that run on the Red Hat Advanced Cluster Management for Kubernetes platform might collect personal data. When you assess the use of the Red Hat Advanced Cluster Management for Kubernetes platform running containerized applications and your need to meet the requirements of GDPR, you must consider the types of personal data that are collected by the application and aspects of how that data is managed, such as: How is the data protected as it flows to and from the application? Is the data encrypted in transit? How is the data stored by the application? Is the data encrypted at rest? How are credentials that are used to access the application collected and stored? How are credentials that are used by the application to access data sources collected and stored? How is data collected by the application removed as needed? This is not a definitive list of the types of data that are collected by the Red Hat Advanced Cluster Management for Kubernetes platform. It is provided as an example for consideration. If you have any questions about the types of data, contact Red Hat. 1.5.7. Data storage The Red Hat Advanced Cluster Management for Kubernetes platform persists technical data that is related to configuration and management of the platform in stateful stores on local or remote file systems as configuration files or in databases. Consideration must be given to securing all data at rest. The Red Hat Advanced Cluster Management for Kubernetes platform supports encryption of data at rest in stateful stores that use dm-crypt . The following items highlight the areas where data is stored, which you might want to consider for GDPR. Platform Configuration Data: The Red Hat Advanced Cluster Management for Kubernetes platform configuration can be customized by updating a configuration YAML file with properties for general settings, Kubernetes, logs, network, Docker, and other settings. This data is used as input to the Red Hat Advanced Cluster Management for Kubernetes platform installer for deploying one or more nodes. The properties also include an administrator user ID and password that are used for bootstrap. Kubernetes Configuration Data: Kubernetes cluster state data is stored in a distributed key-value store, etcd . User Authentication Data, including User IDs and passwords: User ID and password management are handled through a client enterprise LDAP directory. Users and groups that are defined in LDAP can be added to Red Hat Advanced Cluster Management for Kubernetes platform teams and assigned access roles. Red Hat Advanced Cluster Management for Kubernetes platform stores the email address and user ID from LDAP, but does not store the password. Red Hat Advanced Cluster Management for Kubernetes platform stores the group name and upon login, caches the available groups to which a user belongs. 
Group membership is not persisted in any long-term way. Securing user and group data at rest in the enterprise LDAP must be considered. Red Hat Advanced Cluster Management for Kubernetes platform also includes an authentication service, Open ID Connect (OIDC) that interacts with the enterprise directory and maintains access tokens. This service uses ETCD as a backing store. Service authentication data, including user IDs and passwords: Credentials that are used by Red Hat Advanced Cluster Management for Kubernetes platform components for inter-component access are defined as Kubernetes Secrets. All Kubernetes resource definitions are persisted in the etcd key-value data store. Initial credentials values are defined in the platform configuration data as Kubernetes Secret configuration YAML files. For more information, see Secrets in the Kubernetes documentation. 1.5.8. Data access Red Hat Advanced Cluster Management for Kubernetes platform data can be accessed through the following defined set of product interfaces. Web user interface (the console) Kubernetes kubectl CLI Red Hat Advanced Cluster Management for Kubernetes CLI oc CLI These interfaces are designed to allow you to make administrative changes to your Red Hat Advanced Cluster Management for Kubernetes cluster. Administration access to Red Hat Advanced Cluster Management for Kubernetes can be secured and involves three logical, ordered stages when a request is made: authentication, role-mapping, and authorization. 1.5.8.1. Authentication The Red Hat Advanced Cluster Management for Kubernetes platform authentication manager accepts user credentials from the console and forwards the credentials to the backend OIDC provider, which validates the user credentials against the enterprise directory. The OIDC provider then returns an authentication cookie ( auth-cookie ) with the content of a JSON Web Token ( JWT ) to the authentication manager. The JWT token persists information such as the user ID and email address, in addition to group membership at the time of the authentication request. This authentication cookie is then sent back to the console. The cookie is refreshed during the session. It is valid for 12 hours after you sign out of the console or close your web browser. For all subsequent authentication requests made from the console, the front-end NGINX server decodes the available authentication cookie in the request and validates the request by calling the authentication manager. The Red Hat Advanced Cluster Management for Kubernetes platform CLI requires the user to provide credentials to log in. The kubectl and oc CLI also requires credentials to access the cluster. These credentials can be obtained from the management console and expire after 12 hours. Access through service accounts is supported. 1.5.8.2. Role Mapping Red Hat Advanced Cluster Management for Kubernetes platform supports role-based access control (RBAC). In the role mapping stage, the user name that is provided in the authentication stage is mapped to a user or group role. The roles are used when authorizing which administrative activities can be carried out by the authenticated user. 1.5.8.3. Authorization Red Hat Advanced Cluster Management for Kubernetes platform roles control access to cluster configuration actions, to catalog and Helm resources, and to Kubernetes resources. Several IAM (Identity and Access Management) roles are provided, including Cluster Administrator, Administrator, Operator, Editor, Viewer. 
A role is assigned to users or user groups when you add them to a team. Team access to resources can be controlled by namespace. 1.5.8.4. Pod Security Pod security policies are used to set up cluster-level control over what a pod can do or what it can access. 1.5.9. Data Processing Users of Red Hat Advanced Cluster Management for Kubernetes can control the way that technical data that is related to configuration and management is processed and secured through system configuration. Role-based access control (RBAC) controls what data and functions can be accessed by users. Data-in-transit is protected by using TLS . HTTPS ( TLS underlying) is used for secure data transfer between user client and back end services. Users can specify the root certificate to use during installation. Data-at-rest protection is supported by using dm-crypt to encrypt data. These same platform mechanisms that are used to manage and secure Red Hat Advanced Cluster Management for Kubernetes platform technical data can be used to manage and secure personal data for user-developed or user-provided applications. Clients can develop their own capabilities to implement further controls. 1.5.10. Data Deletion Red Hat Advanced Cluster Management for Kubernetes platform provides commands, application programming interfaces (APIs), and user interface actions to delete data that is created or collected by the product. These functions enable users to delete technical data, such as service user IDs and passwords, IP addresses, Kubernetes node names, or any other platform configuration data, as well as information about users who manage the platform. Areas of Red Hat Advanced Cluster Management for Kubernetes platform to consider for support of data deletion: All technical data that is related to platform configuration can be deleted through the management console or the Kubernetes kubectl API. Areas of Red Hat Advanced Cluster Management for Kubernetes platform to consider for support of account data deletion: All technical data that is related to platform configuration can be deleted through the Red Hat Advanced Cluster Management for Kubernetes or the Kubernetes kubectl API. Function to remove user ID and password data that is managed through an enterprise LDAP directory would be provided by the LDAP product used with Red Hat Advanced Cluster Management for Kubernetes platform. 1.5.11. Capability for Restricting Use of Personal Data Using the facilities summarized in this document, Red Hat Advanced Cluster Management for Kubernetes platform enables an end user to restrict usage of any technical data within the platform that is considered personal data. Under GDPR, users have rights to access, modify, and restrict processing. Refer to other sections of this document to control the following: Right to access Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to provide individuals access to their data. Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to provide individuals information about what data Red Hat Advanced Cluster Management for Kubernetes platform holds about the individual. Right to modify Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to allow an individual to modify or correct their data. 
Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to correct an individual's data for them. Right to restrict processing Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to stop processing an individual's data. 1.5.12. Appendix As a platform, Red Hat Advanced Cluster Management for Kubernetes deals with several categories of technical data that could be considered as personal data, such as an administrator user ID and password, service user IDs and passwords, IP addresses, and Kubernetes node names. Red Hat Advanced Cluster Management for Kubernetes platform also deals with information about users who manage the platform. Applications that run on the platform might introduce other categories of personal data that are unknown to the platform. This appendix includes details on data that is logged by the platform services. 1.6. FIPS readiness Red Hat Advanced Cluster Management for Kubernetes is designed for FIPS. When running on Red Hat OpenShift Container Platform in FIPS mode, OpenShift Container Platform uses the Red Hat Enterprise Linux cryptographic libraries submitted to NIST for FIPS Validation on only the architectures that are supported by OpenShift Container Platform. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards . If you plan to manage clusters with FIPS enabled, you must install Red Hat Advanced Cluster Management on an OpenShift Container Platform cluster configured to operate in FIPS mode. The hub cluster must be in FIPS mode because cryptography that is created on the hub cluster is used on managed clusters. To enable FIPS mode on your managed clusters, set fips: true when you provision your OpenShift Container Platform managed cluster. You cannot enable FIPS after you provision your cluster. For more information, see Do you need extra security for your cluster? in the OpenShift Container Platform documentation. 1.6.1. Limitations Read the following limitations with Red Hat Advanced Cluster Management and FIPS. Persistent Volume Claim (PVC) and S3 storage that is used by the search and observability components must be encrypted when you configure the provided storage. Red Hat Advanced Cluster Management does not provide storage encryption; see the OpenShift Container Platform documentation, Configuring persistent storage . When you provision managed clusters using the Red Hat Advanced Cluster Management console, select the FIPS checkbox in the Cluster details section of the managed cluster creation to enable the FIPS standards. 1.7. Observability support Red Hat Advanced Cluster Management is tested with and fully supported by Red Hat OpenShift Data Foundation, formerly Red Hat OpenShift Container Storage. Red Hat Advanced Cluster Management supports the function of the multicluster observability operator on user-provided third-party object storage that is S3 API compatible. The observability service uses Thanos-supported, stable object stores. Red Hat Advanced Cluster Management support efforts include reasonable efforts to identify root causes.
If you open a support ticket and the root cause is the S3 compatible object storage that you provided, then you must open an issue using the customer support channels.
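For context on the S3 compatible object storage requirement described above, the following is a minimal sketch of a Thanos object store configuration that the multicluster observability operator can consume. The secret name, namespace, bucket, endpoint, and credential values are illustrative assumptions rather than values mandated by Red Hat Advanced Cluster Management; substitute the details of your own storage provider.
apiVersion: v1
kind: Secret
metadata:
  name: thanos-object-storage
  namespace: open-cluster-management-observability
type: Opaque
stringData:
  thanos.yaml: |
    type: s3
    config:
      bucket: acm-observability
      endpoint: s3.example.com
      insecure: false
      access_key: <access_key>
      secret_key: <secret_key>
Remember that, per the limitations in Section 1.6.1, this storage must be encrypted when you configure it; Red Hat Advanced Cluster Management does not encrypt it for you.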
[ "create -f", "apiVersion: v1 kind: ConfigMap metadata: name: my-assisted-service-config namespace: assisted-installer data: ALLOW_CONVERGED_FLOW: false", "annotate --overwrite AgentServiceConfig agent unsupported.agent-install.openshift.io/assisted-service-configmap=my-assisted-service-config", "spec: clusterReplicas: 1 clusterSelector: matchLabels: environment: dev", "spec: clusterSelector: matchLabels: environment: dev", "delete appliedmanifestwork <the-left-appliedmanifestwork-name>", "get klusterlet klusterlet -o jsonpath='{.metadata.uid}'", "patch appliedmanifestwork <appliedmanifestwork_name> --type=merge -p '{\"spec\":{\"agentID\": \"'USDAGENT_ID'\"}}'", "delete csv -n openshift-operators volsync-product.v0.6.0", "annotate search search-v2-operator search-pause=true", "patch deployment search-postgres --type json -p '[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/resources/limits/hugepages-2Mi\", \"value\":\"512Mi\"}]'", "get pod <your-postgres-pod-name> -o jsonpath=\"Status: {.status.phase}\"", "get pod <your-postgres-pod-name> -o jsonpath='{.spec.containers[0].resources.limits.hugepages-2Mi}'", "rosa describe cluster --cluster=<cluster-name> | grep -o '^ID:.*", "Error querying resource logs: Service unavailable", "delete deployment multiclusterhub-repo -n <namespace>", "edit mch -n open-cluster-management multiclusterhub", "delete managedcluster local-cluster", "edit mch -n open-cluster-management multiclusterhub", "credentials: secretName: dbsecret usernameSecretKey: username passwordSecretKey: password", "annotations: argocd.argoproj.io/skip-reconcile: \"true\"", "annotations: argocd.argoproj.io/refresh: normal", "apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cm-allclusters-app-set namespace: openshift-gitops spec: generators: - clusterDecisionResource: configMapRef: ocm-placement-generator labelSelector: matchLabels: cluster.open-cluster-management.io/placement: app-placement requeueAfterSeconds: 0", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: sub-rhacm-gitops-demo namespace: hello-openshift annotations: apps.open-cluster-management.io/github-path: myapp apps.open-cluster-management.io/github-branch: master spec: hooksecretref: name: toweraccess channel: rhacm-gitops-demo/ch-rhacm-gitops-demo placement: local: true", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: <towhichcluster> namespace: hello-openshift spec: clusterSelector: matchLabels: local-cluster: \"true\"", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: sub-rhacm-gitops-demo namespace: hello-openshift annotations: apps.open-cluster-management.io/github-path: myapp apps.open-cluster-management.io/github-branch: master spec: hooksecretref: name: toweraccess channel: rhacm-gitops-demo/ch-rhacm-gitops-demo placement: placementRef: name: <towhichcluster> kind: PlacementRule", "nginx-ingress-52edb nginx-ingress-52edb-backend", "apiVersion: security.openshift.io/v1 defaultAddCapabilities: kind: SecurityContextConstraints metadata: name: ingress-nginx namespace: ns-sub-1 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny users: - system:serviceaccount:my-operator:nginx-ingress-52edb - system:serviceaccount:my-operator:nginx-ingress-52edb-backend", "status: phase: PropagationFailed reason: 'Deployable.apps.open-cluster-management.io \"_long_lengthy_name_\" is invalid: 
metadata.labels: Invalid value: \"_long_lengthy_name_\": must be no more than 63 characters/n'", "logs observability-grafana-68f8489659-m79rv -c grafana-dashboard-loader -n open-cluster-management-observability E1017 12:55:24.532493 1 dashboard_controller.go:147] dashboard: sample-dashboard could not be created after retrying 40 times", "scale deployment observability-grafana -n open-cluster-management-observability --replicas=0", "logs observability-grafana-68f8489659-h6jd9 -c grafana-dashboard-loader -n open-cluster-management-observability | grep \"could not be created\"", "level=error name=observatorium caller=logchannel.go:129 msg=\"failed to forward metrics\" returncode=\"500 Internal Server Error\" response=\"no matching hashring to handle tenant\\n\"", "caller=handler.go:551 level=error component=receive component=receive-handler tenant=xxxx err=\"no matching hashring to handle tenant\" msg=\"internal server error\"", "Error on ingesting out-of-order samples", "./setup-grafana-dev.sh --clean", "./setup-grafana-dev.sh --deploy", "annotate managedclusteraddon search-collector -n foobar addon.open-cluster-management.io/search_memory_request=512Mi addon.open-cluster-management.io/search_memory_limit=1024Mi", "Endpoint url cannot have fully qualified paths", "endpoint: example.com:443", "template-error; Failed to create policy template: create not allowed while custom resource definition is terminating", "get crd configurationpolicies.policy.open-cluster-management.io -o=jsonpath='{.metadata.deletionTimestamp}'", "get configurationpolicy -n <cluster-namespace> -o name | xargs oc patch -n <cluster-namespace> --type=merge -p '{\"metadata\":{\"finalizers\": []}}'", "-n open-cluster-management rollout restart deployment/grc-policy-propagator", "violation - couldn't find mapping resource with kind ClusterPolicyReport, please check if you have CRD deployed; violation - couldn't find mapping resource with kind PolicyReport, please check if you have CRD deployed", "graceful termination failed, controllers failed with error: the server could not find the requested resource (post clustermanagementaddons.addon.open-cluster-management.io)", "apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: insecureBrokerConnection: true", "FIPS with information text: Use the Federal Information Processing Standards (FIPS) modules provided with Red Hat Enterprise Linux CoreOS instead of the default Kubernetes cryptography suite file before you deploy the new managed cluster." ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/release_notes/acm-release-notes
23.17. Package Group Selection
23.17. Package Group Selection Now that you have made most of the choices for your installation, you are ready to confirm the default package selection or customize packages for your system. The Package Installation Defaults screen appears and details the default package set for your Red Hat Enterprise Linux installation. This screen varies depending on the version of Red Hat Enterprise Linux you are installing. Important If you install Red Hat Enterprise Linux in text mode, you cannot make package selections. The installer automatically selects packages only from the base and core groups. These packages are sufficient to ensure that the system is operational at the end of the installation process, ready to install updates and new packages. To change the package selection, complete the installation, then use the Add/Remove Software application to make desired changes. Figure 23.46. Package Group Selection By default, the Red Hat Enterprise Linux installation process loads a selection of software that is suitable for a system deployed as a basic server. Note that this installation does not include a graphical environment. To include a selection of software suitable for other roles, click the radio button that corresponds to one of the following options: Basic Server This option provides a basic installation of Red Hat Enterprise Linux for use on a server. Database Server This option provides the MySQL and PostgreSQL databases. Web server This option provides the Apache web server. Enterprise Identity Server Base This option provides OpenLDAP and Enterprise Identity Management (IPA) to create an identity and authentication server. Virtual Host This option provides the KVM and Virtual Machine Manager tools to create a host for virtual machines. Desktop This option provides the OpenOffice.org productivity suite, graphical tools such as the GIMP , and multimedia applications. Software Development Workstation This option provides the necessary tools to compile software on your Red Hat Enterprise Linux system. Minimal This option provides only the packages essential to run Red Hat Enterprise Linux. A minimal installation provides the basis for a single-purpose server or desktop appliance and maximizes performance and security on such an installation. Warning Minimal installation currently does not configure the firewall ( iptables / ip6tables ) by default because the authconfig and system-config-firewall-base packages are missing from the selection. To work around this issue, you can use a Kickstart file to add these packages to your selection. See the Red Hat Customer Portal for details about the workaround, and Chapter 32, Kickstart Installations for information about Kickstart files. If you do not use the workaround, the installation will complete successfully, but no firewall will be configured, presenting a security risk. If you choose to accept the current package list, skip ahead to Section 23.18, "Installing Packages" . To select a component, click on the checkbox beside it (refer to Figure 23.46, "Package Group Selection" ). To customize your package set further, select the Customize now option on the screen. Clicking takes you to the Package Group Selection screen. 23.17.1. Installing from Additional Repositories You can define additional repositories to increase the software available to your system during installation. A repository is a network location that stores software packages along with metadata that describes them. 
Many of the software packages used in Red Hat Enterprise Linux require other software to be installed. The installer uses the metadata to ensure that these requirements are met for every piece of software you select for installation. The Red Hat Enterprise Linux repository is automatically selected for you. It contains the complete collection of software that was released as Red Hat Enterprise Linux 6.9, with the various pieces of software in their versions that were current at the time of release. Figure 23.47. Adding a software repository To include software from extra repositories , select Add additional software repositories and provide the location of the repository. To edit an existing software repository location, select the repository in the list and then select Modify repository . If you change the repository information during a non-network installation, such as from a Red Hat Enterprise Linux DVD, the installer prompts you for network configuration information. Figure 23.48. Select network interface Select an interface from the drop-down menu. Click OK . Anaconda then starts NetworkManager to allow you to configure the interface. Figure 23.49. Network Connections For details of how to use NetworkManager , refer to Section 23.7, "Setting the Hostname" If you select Add additional software repositories , the Edit repository dialog appears. Provide a Repository name and the Repository URL for its location. Once you have located a mirror, to determine the URL to use, find the directory on the mirror that contains a directory named repodata . Once you provide information for an additional repository, the installer reads the package metadata over the network. Software that is specially marked is then included in the package group selection system. Warning If you choose Back from the package selection screen, any extra repository data you may have entered is lost. This allows you to effectively cancel extra repositories. Currently there is no way to cancel only a single repository once entered.
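To illustrate the Kickstart workaround referenced in the Minimal installation warning above, a %packages section along the following lines adds the firewall-related packages that the Minimal selection omits. Treat this as a sketch of the relevant section only, not a complete Kickstart file; the @core group is assumed as the starting selection.
%packages
@core
authconfig
system-config-firewall-base
%end
With these packages present, the installed system can be configured with the standard iptables and ip6tables firewall tooling after the installation completes.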
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-pkgselection-s390
Chapter 7. Configuring memory usage for addresses
Chapter 7. Configuring memory usage for addresses AMQ Broker transparently supports huge queues containing millions of messages, even if the machine that is hosting the broker is running with limited memory. In these situations, it might be not possible to store all of the queues in memory at any one time. To protect against excess memory consumption, you can configure the maximum memory usage that is allowed for each address on the broker. In addition, you can configure the broker to take one of the following actions when memory usage for an address reaches the configured limit: Page messages Silently drop messages Drop messages and notify the sending clients Block clients from sending messages If you configure the broker to page messages when the maximum memory usage for an address is reached, you can configure limits for specific addresses to: Limit the disk space used to page incoming messages Limit the memory used for paged messages that the broker transfers from disk back to memory when clients are ready to consume messages. You can also set a disk usage threshold, which overrides all the configured paging limits. If the disk usage threshold is reached, the broker stops paging and blocks all incoming messages. Important When you use transactions, the broker might allocate extra memory to ensure transactional consistency. In this case, the memory usage reported by the broker might not reflect the total number of bytes being used in memory. Therefore, if you configure the broker to page, drop, or block messages based on a specified maximum memory usage, you should not also use transactions. 7.1. Configuring message paging For any address that has a maximum memory usage limit specified, you can also specify what action the broker takes when that usage limit is reached. One of the options that you can configure is paging . If you configure the paging option, when the maximum size of an address is reached, the broker starts to store messages for that address on disk, in files known as page files . Each page file has a maximum size that you can configure. Each address that you configure in this way has a dedicated folder in your file system to store paged messages. Both queue browsers and consumers can navigate through page files when inspecting messages in a queue. However, a consumer that is using a very specific filter might not be able to consume a message that is stored in a page file until existing messages in the queue have been consumed first. For example, suppose that a consumer filter includes a string expression such as "color='red'" . If a message that meets this condition follows one million messages with the property "color='blue'" , the consumer cannot consume the message until those with "color='blue'" have been consumed first. The broker transfers (that is, depages ) messages from disk into memory when clients are ready to consume them. The broker removes a page file from disk when all messages in that file have been acknowledged. Important AMQ Broker orders pending messages in a queue by JMS message priority. However, messages that are paged are not ordered by priority. If you want to preserve the ordering, do not configure paging and size the broker sufficiently to keep all messages in memory. 7.1.1. Specifying a paging directory The following procedure shows how to specify the location of the paging directory. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, add the paging-directory element. 
Specify a location for the paging directory in your file system. <configuration ...> <core ...> ... <paging-directory> /path/to/paging-directory </paging-directory> ... </core> </configuration> For each address that you subsequently configure for paging, the broker adds a dedicated directory within the paging directory that you have specified. 7.1.2. Configuring an address for paging The following procedure shows how to configure an address for paging. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, add configuration elements to specify maximum memory usage and define paging behavior. For example: <address-settings> <address-setting match="my.paged.address"> ... <max-size-bytes>104857600</max-size-bytes> <max-size-messages>20000</max-size-messages> <page-size-bytes>10485760</page-size-bytes> <address-full-policy>PAGE</address-full-policy> ... </address-setting> </address-settings> max-size-bytes Maximum size, in bytes, of the memory allowed for the address before the broker executes the action specified for the address-full-policy attribute. The default value is -1 , which means that there is no limit. The value that you specify also supports byte notation such as "K", "MB", and "GB". max-size-messages Maximum number of messages allowed for the address before the broker executes the action specified for the address-full-policy attribute. The default value is -1, which means that there is no message limit. page-size-bytes Size, in bytes, of each page file used on the paging system. The default value is 10485760 (that is, 10 MiB). The value that you specify also supports byte notation such as "K", "MB", and "GB". address-full-policy Action that the broker takes when the maximum size for an address has been reached. The default value is PAGE . Valid values are: PAGE The broker pages any further messages to disk. DROP The broker silently drops any further messages. FAIL The broker drops any further messages and issues exceptions to client message producers. BLOCK Client message producers block when they try to send further messages. If you set limits for the max-size-bytes and max-size-message attributes, the broker executes the action specified for the address-full-policy attribute when either limit is reached. With the configuration in the example, the broker starts paging messages for the my.paged.address address when the total messages for the address in memory exceeds 20,000 or uses 104857600 bytes of available memory. Additional paging configuration elements that are not shown in the preceding example are described below. page-sync-timeout Time, in nanoseconds, between periodic page synchronizations. If you are using an asynchronous IO journal (that is, journal-type is set to ASYNCIO in the broker.xml configuration file), the default value is 3333333 . If you are using a standard Java NIO journal (that is, journal-type is set to NIO ), the default value is the configured value of the journal-buffer-timeout parameter. In the preceding example , when messages sent to the address my.paged.address exceed 104857600 bytes in memory, the broker begins paging. Note If you specify max-size-bytes in an address-setting element, the value applies to each matching address. 
Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes . 7.1.3. Configuring a global paging size Sometimes, configuring a memory limit per address is not practical, for example, when a broker manages many addresses that have different usage patterns. In these situations, you can specify a global memory limit. The global limit is the total amount of memory that the broker can use for all addresses. When this memory limit is reached, the broker executes the action specified for the address-full-policy attribute for the address associated with each new incoming message. The following procedure shows how to configure a global paging size. Prerequisites You should be familiar with how to configure an address for paging. For more information, see Section 7.1.2, "Configuring an address for paging" . Procedure Stop the broker. On Linux: On Windows: Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, add the global-max-size element and specify a value. For example: <configuration> <core> ... <global-max-size>1GB</global-max-size> <global-max-messages>900000</global-max-messages> ... </core> </configuration> global-max-size Total amount of memory, in bytes, that the broker can use for all addresses. When this limit is reached, the broker executes the action specified for the address-full-policy attribute for the address associated with each incoming message. The default value of global-max-size is half of the maximum memory available to the Java virtual machine (JVM) that is hosting the broker. The value for global-max-size is in bytes, but also supports byte notation (for example, "K", "Mb", "GB"). In the preceding example, the broker is configured to use a maximum of one gigabyte of available memory when processing messages. global-max-messages The total number of messages allowed for all addresses. When this limit is reached, the broker executes the action specified for the address-full-policy attribute for the address associated with each incoming message. The default value is -1, which means that there is no message limit. If you set limits for the global-max-size and global-max-messages attributes, the broker executes the action specified for the address-full-policy attribute when either limit is reached. With the configuration in the example, the broker starts paging messages for all addresses when the number of messages in memory exceeds 900,000 or uses 1 GB of available memory. Note If limits that are set for an individual address, by using the max-size-bytes or max-size-message attributes, are reached before the limits set for the global-max-size or global-max-messages attributes, the broker executes the action specified for the address-full-policy attribute for that address. Start the broker. On Linux: On Windows: 7.1.4. Limiting disk usage during paging for specific addresses You can limit the amount of disk space that the broker can use before it stops paging incoming messages for an individual address or set of addresses. Procedure Stop the broker. On Linux: On Windows: Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, add attributes to specify paging limits based on disk usage or number of messages, or both, and specify the action to take if either limit is reached. For example: <address-settings> <address-setting match="match="my.paged.address""> ... 
<page-limit-bytes>10G</page-limit-bytes> <page-limit-messages>1000000</page-limit-messages> <page-full-policy>FAIL</page-full-policy> ... </address-setting> </address-settings> page-limit-bytes Maximum size, in bytes, of the disk space allowed for paging incoming messages for the address before the broker executes the action specified for the page-full-policy attribute. The value that you specify supports byte notation such as "K", "MB", and "GB". The default value is -1, which means that there is no limit. page-limit-messages Maximum number of incoming message that can be paged for the address before the broker executes the action specified for the page-full-policy attribute. The default value is -1, which means that there is no message limit. page-full-policy Action that the broker takes when a limit set in the page-limit-bytes or page-limit-messages attributes is reached for an address. Valid values are: DROP The broker silently drops any further messages. FAIL The broker drops any further messages and notifies the sending clients In the preceding example, the broker pages message for the my.paged.address address until paging uses 10GB of disk space or until a total of one million messages are paged. Start the broker. On Linux: On Windows: 7.1.5. Controlling the flow of paged messages into memory If AMQ Broker is configured to page messages to disk, the broker reads paged messages and transfers the messages into memory when clients are ready to consume messages. To prevent messages from consuming excess memory, you can limit the memory used by each address for messages that the broker transfers from disk to memory. Important If client applications leave too many messages pending acknowledgment, the broker does not read paged messages until the pending messages are acknowledged, which can cause message starvation on the broker. For example, if the limit for the transfer of paged messages into memory, which is 20 MB by default, is reached, the broker waits for an acknowledgment from a client before it reads any more messages. If, at the same time, clients are waiting to receive sufficient messages before they send an acknowledgment to the broker, which is determined by the batch size used by clients, the broker is starved of messages. To avoid starvation, either increase the broker limits that control the transfer of paged message into memory or reduce the number of delivering messages. You can reduce the number of delivering messages by ensuring that clients either commit message acknowledgments sooner or use a timeout and commit acknowledgments when no more messages are received from the broker. You can see the number and size of delivering messages in a queue's Delivering Count and Delivering Bytes metrics in AMQ Management Console. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an address-settings element that you have configured for a matching address or set of addresses, specify limits on the transfer of paged messages into memory. For example: address-settings> <address-setting match="my.paged.address"> ... <max-read-page-messages>104857600</max-read-page-messages> <max-read-page-bytes>20MB</max-read-page-bytes> ... </address-setting> </address-settings> Max-read-page-messages Maximum number of paged messages that the broker can read from disk into memory per-address. The default value is -1, which means that no limit applies. Max-read-page-bytes Maximum size in bytes of paged messages that the broker can read from disk into memory per-address. 
The default value is 20 MB. When it applies these limits, the broker counts both messages in memory that are ready for delivery to consumers and messages that are currently delivering. If consumers are slow to acknowledge messages, delivering messages can cause the memory or message limit to be reached and prevent the broker from reading new messages into memory. As a result, the broker can be starved of messages. If consumers are slow to acknowledge messages and the configured max-read-page-messages or max-read-page-bytes limits are reached by messages that are currently delivering, specify separate limits on the transfer of paged messages into memory for messages that are currently delivering. For example address-settings> <address-setting match="my.paged.address"> ... <prefetch-page-bytes>20MB</prefetch-page-bytes> <prefetch-page-messages>104857600</prefetch-page-messages> ... </address-setting> </address-settings> prefetch-page-bytes Memory, in bytes, that is available to read paged messages into memory per-queue. The default value is 20 MB. prefetch-page-messages Number of paged messages that the broker can read from disk into memory per-queue. The default value is -1, which means that no limit applies. If you specify limits for the prefetch-page-bytes or prefetch-page-messages parameters to limit the amount of memory or number of messages used by messages currently delivering, set higher limits for the max-read-page-bytes or max-read-page-message parameter to provide capacity to read new messages into memory. Note If the value of the max-read-page-bytes parameter is reached before the value of the prefetch-page-bytes parameter, the broker stops reading further paged messages into memory. 7.1.6. Setting a disk usage threshold You can set a disk usage threshold which, if reached, causes the broker to stop paging and block all incoming messages. Procedure Stop the broker. On Linux: On Windows: Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element add the max-disk-usage configuration element and specify a value. For example: <configuration> <core> ... <max-disk-usage>80</max-disk-usage> ... </core> </configuration> max-disk-usage Maximum percentage of the available disk space that the broker can use. When this limit is reached, the broker blocks incoming messages. The default value is 90 . In the preceding example, the broker is limited to using eighty percent of available disk space. Start the broker. On Linux: On Windows: 7.2. Configuring message dropping Section 7.1.2, "Configuring an address for paging" shows how to configure an address for paging. As part of that procedure, you set the value of address-full-policy to PAGE . To drop messages (rather than paging them) when an address reaches its specified maximum size, set the value of the address-full-policy to one of the following: DROP When the maximum size of a given address has been reached, the broker silently drops any further messages. FAIL When the maximum size of a given address has been reached, the broker drops any further messages and issues exceptions to producers. 7.3. Configuring message blocking The following procedures show how to configure message blocking when a given address reaches the maximum size limit that you have specified. Note You can configure message blocking only for the Core, OpenWire, and AMQP protocols. 7.3.1. 
Blocking Core and OpenWire producers The following procedure shows how to configure message blocking for Core and OpenWire message producers when a given address reaches the maximum size limit that you have specified. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, add configuration elements to define message blocking behavior. For example: <address-settings> <address-setting match="my.blocking.address"> ... <max-size-bytes>300000</max-size-bytes> <address-full-policy>BLOCK</address-full-policy> ... </address-setting> </address-settings> max-size-bytes Maximum size, in bytes, of the memory allowed for the address before the broker executes the policy specified for address-full-policy . The value that you specify also supports byte notation such as "K", "MB", and "GB". Note If you specify max-size-bytes in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes . address-full-policy Action that the broker takes when then the maximum size for an address has been reached. In the preceding example, when messages sent to the address my.blocking.address exceed 300000 bytes in memory, the broker begins blocking further messages from Core or OpenWire message producers. 7.3.2. Blocking AMQP producers Protocols such as Core and OpenWire use a window-size flow control system. In this system, credits represent bytes and are allocated to producers. If a producer wants to send a message, the producer must wait until it has sufficient credits for the size of the message. By contrast, AMQP flow control credits do not represent bytes. Instead, AMQP credits represent the number of messages a producer is permitted to send, regardless of message size. Therefore, it is possible, in some situations, for AMQP producers to significantly exceed the max-size-bytes value of an address. Therefore, to block AMQP producers, you must use a different configuration element, max-size-bytes-reject-threshold . For a matching address or set of addresses, this element specifies the maximum size, in bytes, of all AMQP messages in memory. When the total size of all messages in memory reaches the specified limit, the broker blocks AMQP producers from sending further messages. The following procedure shows how to configure message blocking for AMQP message producers. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, specify the maximum size of all AMQP messages in memory. For example: <address-settings> <address-setting match="my.amqp.blocking.address"> ... <max-size-bytes-reject-threshold>300000</max-size-bytes-reject-threshold> ... </address-setting> </address-settings> max-size-bytes-reject-threshold Maximum size, in bytes, of the memory allowed for the address before the broker blocks further AMQP messages. The value that you specify also supports byte notation such as "K", "MB", and "GB". 
By default, max-size-bytes-reject-threshold is set to -1 , which means that there is no maximum size. Note If you specify max-size-bytes-reject-threshold in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes-reject-threshold . In the preceding example, when messages sent to the address my.amqp.blocking.address exceed 300000 bytes in memory, the broker begins blocking further messages from AMQP producers. 7.4. Understanding memory usage on multicast addresses When a message is routed to an address that has multicast queues bound to it, there is only one copy of the message in memory. Each queue has only a reference to the message. Because of this, the associated memory is released only after all queues referencing the message have delivered it. In this type of situation, if you have a slow consumer, the entire address might experience a negative performance impact. For example, consider this scenario: An address has ten queues that use the multicast routing type. Due to a slow consumer, one of the queues does not deliver its messages. The other nine queues continue to deliver messages and are empty. Messages continue to arrive to the address. The queue with the slow consumer continues to accumulate references to the messages, causing the broker to keep the messages in memory. When the maximum size of the address is reached, the broker starts to page messages. In this scenario because of a single slow consumer, consumers on all queues are forced to consume messages from the page system, requiring additional IO. Additional resources To learn how to configure flow control to regulate the flow of data between the broker and producers and consumers, see Flow control in the AMQ Core Protocol JMS documentation.
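As a complement to Section 7.2, the following address-setting sketch shows what a drop-style configuration can look like; the address name and size limit are placeholders rather than values taken from this guide.
<address-settings>
    <address-setting match="my.drop.address">
        <max-size-bytes>300000</max-size-bytes>
        <address-full-policy>DROP</address-full-policy>
    </address-setting>
</address-settings>
With this configuration, once messages for my.drop.address exceed 300000 bytes in memory, the broker silently drops any further messages instead of paging them. Substituting FAIL for DROP additionally issues exceptions to producers, as described in Section 7.2.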
[ "<configuration ...> <core ...> <paging-directory> /path/to/paging-directory </paging-directory> </core> </configuration>", "<address-settings> <address-setting match=\"my.paged.address\"> <max-size-bytes>104857600</max-size-bytes> <max-size-messages>20000</max-size-messages> <page-size-bytes>10485760</page-size-bytes> <address-full-policy>PAGE</address-full-policy> </address-setting> </address-settings>", "<broker_instance_dir> /bin/artemis stop", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "<configuration> <core> <global-max-size>1GB</global-max-size> <global-max-messages>900000</global-max-messages> </core> </configuration>", "<broker_instance_dir> /bin/artemis run", "<broker_instance_dir> \\bin\\artemis-service.exe start", "<broker_instance_dir> /bin/artemis stop", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "<address-settings> <address-setting match=\"match=\"my.paged.address\"\"> <page-limit-bytes>10G</page-limit-bytes> <page-limit-messages>1000000</page-limit-messages> <page-full-policy>FAIL</page-full-policy> </address-setting> </address-settings>", "<broker_instance_dir> /bin/artemis run", "<broker_instance_dir> \\bin\\artemis-service.exe start", "address-settings> <address-setting match=\"my.paged.address\"> <max-read-page-messages>104857600</max-read-page-messages> <max-read-page-bytes>20MB</max-read-page-bytes> </address-setting> </address-settings>", "address-settings> <address-setting match=\"my.paged.address\"> <prefetch-page-bytes>20MB</prefetch-page-bytes> <prefetch-page-messages>104857600</prefetch-page-messages> </address-setting> </address-settings>", "<broker_instance_dir> /bin/artemis stop", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "<configuration> <core> <max-disk-usage>80</max-disk-usage> </core> </configuration>", "<broker_instance_dir> /bin/artemis run", "<broker_instance_dir> \\bin\\artemis-service.exe start", "<address-settings> <address-setting match=\"my.blocking.address\"> <max-size-bytes>300000</max-size-bytes> <address-full-policy>BLOCK</address-full-policy> </address-setting> </address-settings>", "<address-settings> <address-setting match=\"my.amqp.blocking.address\"> <max-size-bytes-reject-threshold>300000</max-size-bytes-reject-threshold> </address-setting> </address-settings>" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/assembly-br-configuring-maximum-memory-usage-for-addresses_configuring
Chapter 35. System and Subscription Management
Chapter 35. System and Subscription Management Undercloud no longer fails on a system with no configured repositories Previously, when the user tried to install the OpenStack Undercloud on a system with no configured repositories, the yum package manager required installation of MySQL dependencies that were already installed. As a consequence, the Undercloud install script failed. With this update, yum has been fixed to correctly detect already installed MySQL dependencies. As a result, the Undercloud install script no longer fails on a system with no configured repositories. (BZ#1352585) The yum commands provided by the yum-plugin-verify plug-in now set the exit status to 1 if any mismatches are found Previously, the yum commands provided by the yum-plugin-verify plug-in returned exit code 0 even when they found discrepancies in a package. The bug has been fixed, and the exit status is now set to 1 if any mismatches are found. (BZ#1406891)
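As a quick illustration of the corrected exit-status behavior, you can run one of the verify commands that the plug-in provides and inspect the return code; the package name here is only an example and assumes that the yum-plugin-verify package is installed.
# yum verify-rpm bash
# echo $?
A non-zero status (1) now indicates that mismatches were found, which makes the verify commands usable in scripts and monitoring checks.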
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/bug_fixes_system_and_subscription_management
Backup and restore
Backup and restore Red Hat Advanced Cluster Security for Kubernetes 4.5 Backing up and restoring Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/backup_and_restore/index
5.308. spice-protocol
5.308. spice-protocol 5.308.1. RHEA-2012:0760 - spice-protocol enhancement update An updated spice-protocol package that adds several enhancements is now available for Red Hat Enterprise Linux 6. The spice-protocol package contains header files that describe the SPICE protocol and the QXL para-virtualized graphics card. The SPICE protocol is needed to build newer versions of the spice-client and the spice-server packages. BZ# 758088 The spice-protocol package has been upgraded to upstream version 0.10.1, which provides a number of enhancements over the previous version, including support for USB redirection. All users who build spice packages are advised to upgrade to this updated package, which adds these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/spice-protocol
Chapter 1. Getting started overview
Chapter 1. Getting started overview Use Red Hat Streams for Apache Kafka to create and set up Kafka clusters, then connect your applications and services to those clusters. This guide describes how to install and start using Streams for Apache Kafka on OpenShift Container Platform. You can install the Streams for Apache Kafka operator directly from the OperatorHub in the OpenShift web console. The Streams for Apache Kafka operator understands how to install and manage Kafka components. Installing from the OperatorHub provides a standard configuration of Streams for Apache Kafka that allows you to take advantage of automatic updates. When the Streams for Apache Kafka operator is installed, it provides the resources to install instances of Kafka components. After installing a Kafka cluster, you can start producing and consuming messages. Note If you require more flexibility with your deployment, you can use the installation artifacts provided with Streams for Apache Kafka. For more information on using the installation artifacts, see Deploying and Managing Streams for Apache Kafka on OpenShift . 1.1. Prerequisites The following prerequisites are required for getting started with Streams for Apache Kafka. You have a Red Hat account. JDK 11 or later is installed. An OpenShift 4.14 and later cluster is available. The OpenShift oc command-line tool is installed and configured to connect to the running cluster. The steps to get started are based on using the OperatorHub in the OpenShift web console, but you'll also use the OpenShift oc CLI tool to perform certain operations. You'll need to connect to your OpenShift cluster using the oc tool. You can install the oc CLI tool from the web console by clicking the '?' help menu, then Command Line Tools . You can copy the required oc login details from the web console by clicking your profile name, then Copy login command . 1.2. Additional resources Streams for Apache Kafka Overview Deploying and Upgrading Streams for Apache Kafka on OpenShift
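As a sketch of the oc connection step described in the prerequisites, the login command that you copy from the web console typically takes the following shape; the token and API server URL are placeholders for the values of your own cluster.
oc login --token=<token> --server=https://api.<cluster_domain>:6443
oc whoami
Running oc whoami afterwards confirms that the CLI is connected and shows the logged-in user.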
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/getting_started_with_streams_for_apache_kafka_on_openshift/getting_started_overview
Chapter 4. Execution environments
Chapter 4. Execution environments Troubleshoot issues with execution environments. 4.1. Issue - Cannot select the "Use in Controller" option for execution environment image on private automation hub You cannot use the Use in Controller option for an execution environment image on private automation hub. You also receive the error message: "No Controllers available". To resolve this issue, connect automation controller to your private automation hub instance. Procedure Change the /etc/pulp/settings.py file on private automation hub and add one of the following parameters depending on your configuration: Single controller CONNECTED_ANSIBLE_CONTROLLERS = [' <https://my.controller.node> '] Many controllers behind a load balancer CONNECTED_ANSIBLE_CONTROLLERS = [' <https://my.controller.loadbalancer> '] Many controllers without a load balancer CONNECTED_ANSIBLE_CONTROLLERS = [' <https://my.controller.node1> ', ' <https://my.controller2.node2> '] Stop all of the private automation hub services: # systemctl stop pulpcore.service pulpcore-api.service pulpcore-content.service [email protected] [email protected] nginx.service redis.service Restart all of the private automation hub services: # systemctl start pulpcore.service pulpcore-api.service pulpcore-content.service [email protected] [email protected] nginx.service redis.service Verification Verify that you can now use the Use in Controller option in private automation hub.
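After restarting the services, an optional verification sketch such as the following can confirm that private automation hub is back up before you retest the Use in Controller option; it simply checks the non-templated services named in the procedure above.
# systemctl is-active pulpcore.service pulpcore-api.service pulpcore-content.service nginx.service redis.service
Each service should report active; if any report failed, review its journal before continuing.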
[ "CONNECTED_ANSIBLE_CONTROLLERS = [' <https://my.controller.node> ']", "CONNECTED_ANSIBLE_CONTROLLERS = [' <https://my.controller.loadbalancer> ']", "CONNECTED_ANSIBLE_CONTROLLERS = [' <https://my.controller.node1> ', ' <https://my.controller2.node2> ']", "systemctl stop pulpcore.service pulpcore-api.service pulpcore-content.service [email protected] [email protected] nginx.service redis.service", "systemctl start pulpcore.service pulpcore-api.service pulpcore-content.service [email protected] [email protected] nginx.service redis.service" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/troubleshooting_ansible_automation_platform/troubleshoot-execution-environments
Chapter 12. Upgrading
Chapter 12. Upgrading For version upgrades, the Red Hat build of OpenTelemetry Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs in the OpenShift Container Platform by default. The OLM queries for available Operators as well as upgrades for installed Operators. When the Red Hat build of OpenTelemetry Operator is upgraded to the new version, it scans for running OpenTelemetry Collector instances that it manages and upgrades them to the version corresponding to the Operator's new version. 12.1. Additional resources Operator Lifecycle Manager concepts and resources Updating installed Operators
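To observe what the OLM is doing for this Operator, you can inspect its Subscription and ClusterServiceVersion objects and the Collector instances it manages; the namespace below is a commonly used installation namespace and is an assumption that may differ in your cluster.
oc get subscription,csv -n openshift-opentelemetry-operator
oc get opentelemetrycollectors --all-namespaces
The ClusterServiceVersion shows the Operator version currently installed, and the OpenTelemetryCollector instances are the ones that the Operator upgrades after its own update.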
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/red_hat_build_of_opentelemetry/dist-tracing-otel-updating
Chapter 3. Distribution of content in RHEL 8
Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Installation ISO image is in multiple GB size, and as a result, it might not fit on optical media formats. A USB key or USB hard drive is recommended when using the Installation ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. For a list of users and groups created by RPMs in a base RHEL installation, and the steps to obtain this list, see the What are all of the users and groups in a base RHEL installation? Knowledgebase article. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . 
Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, installing software is ensured by the YUM tool, which is based on the DNF technology. We deliberately adhere to usage of the yum term for consistency with previous major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8
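A short sketch of the module workflow described in Section 3.3, using the postgresql module mentioned above; the streams that are actually offered depend on the repositories available to your system.
# yum module list postgresql
# yum module install postgresql:10
# yum module list --installed
The first command lists the available streams and marks the default, the second installs the postgresql:10 stream, and the third confirms which stream is enabled and installed.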
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.7_release_notes/distribution-of-content-in-rhel-8
Chapter 6. Managing DNS Using Capsule
Chapter 6. Managing DNS Using Capsule Satellite can manage DNS records using your Capsule. DNS management includes updating and removing DNS records from existing DNS zones. A Capsule has multiple DNS providers that you can use to integrate Satellite with your existing DNS infrastructure or deploy a new one. After you have enabled DNS, your Capsule can manipulate any DNS server that complies with RFC 2136 using the dns_nsupdate provider. Other providers offer more direct integration, such as dns_infoblox for Infoblox . Available DNS Providers dns_infoblox - For more information, see Using Infoblox as DHCP and DNS Providers in Provisioning Hosts . dns_nsupdate - Dynamic DNS update using nsupdate. For more information, see Using Infoblox as DHCP and DNS Providers in Provisioning Hosts . dns_nsupdate_gss - Dynamic DNS update with GSS-TSIG. For more information, see Section 4.4.1, "Configuring Dynamic DNS Update with GSS-TSIG Authentication" .
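For orientation, this is roughly what an RFC 2136 dynamic update looks like when issued directly with the nsupdate utility that the dns_nsupdate provider relies on; the DNS server name, key file path, and record values are illustrative only, and in normal operation the Capsule performs these updates for you.
nsupdate -k /etc/rndc.key <<EOF
server dns.example.com
update add host01.example.com 3600 IN A 192.0.2.10
send
EOF
Any DNS server that accepts such TSIG-signed updates can be managed through the dns_nsupdate provider.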
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_capsule_server/managing_dns_using_smart_proxy_capsule
Chapter 5. Visualizing power monitoring metrics
Chapter 5. Visualizing power monitoring metrics Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can visualize power monitoring metrics in the OpenShift Container Platform web console by accessing power monitoring dashboards or by exploring Metrics under the Observe tab. 5.1. Power monitoring dashboards overview There are two types of power monitoring dashboards. Both provide different levels of details around power consumption metrics for a single cluster: Power Monitoring / Overview dashboard With this dashboard, you can observe the following information: An aggregated view of CPU architecture and its power source ( rapl-sysfs , rapl-msr , or estimator ) along with total nodes with this configuration Total energy consumption by a cluster in the last 24 hours (measured in kilowatt-hour) The amount of power consumed by the top 10 namespaces in a cluster in the last 24 hours Detailed node information, such as its CPU architecture and component power source These features allow you to effectively monitor the energy consumption of the cluster without needing to investigate each namespace separately. Warning Ensure that the Components Source column does not display estimator as the power source. Figure 5.1. The Detailed Node Information table with rapl-sysfs as the component power source If Kepler is unable to obtain hardware power consumption metrics, the Components Source column displays estimator as the power source, which is not supported in Technology Preview. If that happens, then the values from the nodes are not accurate. Power Monitoring / Namespace dashboard This dashboard allows you to view metrics by namespace and pod. You can observe the following information: The power consumption metrics, such as consumption in DRAM and PKG The energy consumption metrics in the last hour, such as consumption in DRAM and PKG for core and uncore components This feature allows you to investigate key peaks and easily identify the primary root causes of high consumption. 5.2. Accessing power monitoring dashboards as a cluster administrator You can access power monitoring dashboards from the Administrator perspective of the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. You have installed the Power monitoring Operator. You have deployed Kepler in your cluster. You have enabled monitoring for user-defined projects. Procedure In the Administrator perspective of the web console, go to Observe Dashboards . From the Dashboard drop-down list, select the power monitoring dashboard you want to see: Power Monitoring / Overview Power Monitoring / Namespace 5.3. Accessing power monitoring dashboards as a developer You can access power monitoring dashboards from the Developer perspective of the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a developer or as a user. 
You have installed the Power monitoring Operator. You have deployed Kepler in your cluster. You have enabled monitoring for user-defined projects. You have view permissions for the namespace openshift-power-monitoring , the namespace where Kepler is deployed to. Procedure In the Developer perspective of the web console, go to Observe Dashboard . From the Dashboard drop-down list, select the power monitoring dashboard you want to see: Power Monitoring / Overview 5.4. Power monitoring metrics overview The Power monitoring Operator exposes the following metrics, which you can view by using the OpenShift Container Platform web console under the Observe Metrics tab. Warning This list of exposed metrics is not definitive. Metrics might be added or removed in future releases. Table 5.1. Power monitoring Operator metrics Metric name Description kepler_container_joules_total The aggregated package or socket energy consumption of CPU, DRAM, and other host components by a container. kepler_container_core_joules_total The total energy consumption across CPU cores used by a container. If the system has access to RAPL_ metrics, this metric reflects the proportional container energy consumption of the RAPL Power Plan 0 (PP0), which is the energy consumed by all CPU cores in the socket. kepler_container_dram_joules_total The total energy consumption of DRAM by a container. kepler_container_uncore_joules_total The cumulative energy consumption by uncore components used by a container. The number of components might vary depending on the system. The uncore metric is processor model-specific and might not be available on some server CPUs. kepler_container_package_joules_total The cumulative energy consumed by the CPU socket used by a container. It includes all core and uncore components. kepler_container_other_joules_total The cumulative energy consumption of host components, excluding CPU and DRAM, used by a container. Generally, this metric is the energy consumption of ACPI hosts. kepler_container_bpf_cpu_time_us_total The total CPU time used by the container that utilizes the BPF tracing. kepler_container_cpu_cycles_total The total CPU cycles used by the container that utilizes hardware counters. CPU cycles is a metric directly related to CPU frequency. On systems where processors run at a fixed frequency, CPU cycles and total CPU time are roughly equivalent. On systems where processors run at varying frequencies, CPU cycles and total CPU time have different values. kepler_container_cpu_instructions_total The total CPU instructions used by the container that utilizes hardware counters. CPU instructions is a metric that accounts how the CPU is used. kepler_container_cache_miss_total The total cache miss that occurs for a container that uses hardware counters. kepler_container_cgroupfs_cpu_usage_us_total The total CPU time used by a container reading from control group statistics. kepler_container_cgroupfs_memory_usage_bytes_total The total memory in bytes used by a container reading from control group statistics. kepler_container_cgroupfs_system_cpu_usage_us_total The total CPU time in kernel space used by the container reading from control group statistics. kepler_container_cgroupfs_user_cpu_usage_us_total The total CPU time in user space used by a container reading from control group statistics. kepler_container_bpf_net_tx_irq_total The total number of packets transmitted to network cards of a container that uses the BPF tracing. 
kepler_container_bpf_net_rx_irq_total The total number of packets received from network cards of a container that uses the BPF tracing. kepler_container_bpf_block_irq_total The total number of block I/O calls of a container that uses the BPF tracing. kepler_node_info The node metadata, such as the node CPU architecture. kepler_node_core_joules_total The total energy consumption across CPU cores used by all containers running on a node and operating system. kepler_node_uncore_joules_total The cumulative energy consumption by uncore components used by all containers running on the node and operating system. The number of components might vary depending on the system. kepler_node_dram_joules_total The total energy consumption of DRAM by all containers running on the node and operating system. kepler_node_package_joules_total The cumulative energy consumed by the CPU socket used by all containers running on the node and operating system. It includes all core and uncore components. kepler_node_other_host_components_joules_total The cumulative energy consumption of host components, excluding CPU and DRAM, used by all containers running on the node and operating system. Generally, this metric is the energy consumption of ACPI hosts. kepler_node_platform_joules_total The total energy consumption of the host. Generally, this metric is the host energy consumption from Redfish BMC or ACPI. kepler_node_energy_stat Multiple metrics from nodes labeled with container resource utilization control group metrics that are used in the model server. kepler_node_accelerator_intel_qat The utilization of the accelerator Intel QAT on a certain node. If the system contains Intel QATs, Kepler can calculate the utilization of the node's QATs through telemetry. 5.5. Additional resources Enabling monitoring for user-defined projects
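In addition to the dashboards, the metrics listed above can be queried directly with PromQL, either from the Observe Metrics page or over the cluster monitoring API. The following is an informal sketch rather than a supported procedure: it assumes the default thanos-querier route in the openshift-monitoring namespace and a logged-in user with permission to query metrics, and only the metric name is taken from the table above.

# Informal sketch: query a Kepler metric through the cluster monitoring API.
# The route name and namespace are assumptions about a default monitoring setup.
HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
TOKEN=$(oc whoami -t)

# Total container energy consumption (joules) reported across the cluster:
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  --data-urlencode 'query=sum(kepler_container_joules_total)' \
  "https://${HOST}/api/v1/query"

The same expression can be pasted into the Observe Metrics page; to break consumption down further, group the sum by whichever namespace or pod labels Kepler attaches in your environment.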
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/power_monitoring/visualizing-power-monitoring-metrics
Chapter 1. AWS DynamoDB Sink
Chapter 1. AWS DynamoDB Sink Send data to AWS DynamoDB service. The sent data will insert/update/delete an item on the given AWS DynamoDB table. Access Key/Secret Key are the basic method for authenticating to the AWS DynamoDB service. These parameters are optional, because the Kamelet also provides the following option 'useDefaultCredentialsProvider'. When using a default Credentials Provider the AWS DynamoDB client will load the credentials through this provider and won't use the static credentials. This is the reason for not having access key and secret key as mandatory parameters for this Kamelet. This Kamelet expects a JSON object as the body. The mapping between the JSON fields and table attribute values is done by key, so if you have the input as follows: {"username":"oscerd", "city":"Rome"} The Kamelet will insert/update an item in the given AWS DynamoDB table and set the attributes 'username' and 'city' respectively. Please note that the JSON object must include the primary key values that define the item. 1.1. Configuration Options The following table summarizes the configuration options available for the aws-ddb-sink Kamelet: Property Name Description Type Default Example region * AWS Region The AWS region to connect to string "eu-west-1" table * Table Name of the DynamoDB table to look at string accessKey Access Key The access key obtained from AWS string operation Operation The operation to perform (one of PutItem, UpdateItem, DeleteItem) string "PutItem" "PutItem" overrideEndpoint Endpoint Overwrite Set the need for overriding the endpoint URI. This option needs to be used in combination with the uriEndpointOverride setting. boolean false secretKey Secret Key The secret key obtained from AWS string uriEndpointOverride Overwrite Endpoint URI Set the overriding endpoint URI. This option needs to be used in combination with the overrideEndpoint option. string useDefaultCredentialsProvider Default Credentials Provider Set whether the DynamoDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. boolean false writeCapacity Write Capacity The provisioned throughput to reserve for writing resources to your table integer 1 Note Fields marked with an asterisk (*) are mandatory. 1.2. Dependencies At runtime, the aws-ddb-sink Kamelet relies upon the presence of the following dependencies: mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.8.0 camel:core camel:jackson camel:aws2-ddb camel:kamelet 1.3. Usage This section describes how you can use the aws-ddb-sink . 1.3.1. Knative Sink You can use the aws-ddb-sink Kamelet as a Knative sink by binding it to a Knative object. aws-ddb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: "eu-west-1" table: "The Table" 1.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 1.3.1.2. Procedure for using the cluster CLI Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: 1.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: This command creates the KameletBinding in the current namespace on the cluster. 1.3.2.
Kafka Sink You can use the aws-ddb-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-ddb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: "eu-west-1" table: "The Table" 1.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 1.3.2.2. Procedure for using the cluster CLI Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: 1.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: This command creates the KameletBinding in the current namespace on the cluster. 1.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-ddb-sink.kamelet.yaml
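If you are not using the default credentials provider, a binding can also pass static credentials and an explicit operation. The following kamel command is a hypothetical variation of the examples above; the property names come from the configuration table, and the key values are placeholders to replace with your own.

# Hypothetical variation of the bindings above: static credentials plus an
# explicit operation. Replace the placeholder values with your own.
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-ddb-sink \
  -p "sink.region=eu-west-1" \
  -p "sink.table=The Table" \
  -p "sink.operation=PutItem" \
  -p "sink.accessKey=YOUR-ACCESS-KEY" \
  -p "sink.secretKey=YOUR-SECRET-KEY" \
  -p "sink.useDefaultCredentialsProvider=false"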
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: \"eu-west-1\" table: \"The Table\"", "apply -f aws-ddb-sink-binding.yaml", "kamel bind channel:mychannel aws-ddb-sink -p \"sink.region=eu-west-1\" -p \"sink.table=The Table\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: \"eu-west-1\" table: \"The Table\"", "apply -f aws-ddb-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-ddb-sink -p \"sink.region=eu-west-1\" -p \"sink.table=The Table\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/aws-ddb-sink
4.5.2. Virtualization Optimizations
4.5.2. Virtualization Optimizations Because KVM utilizes kernel functionality, KVM-based virtualized guests immediately benefit from all bare-metal optimizations. Red Hat Enterprise Linux also includes a number of enhancements to allow virtualized guests to approach the performance level of a bare-metal system. These enhancements focus on the I/O path in storage and network access, allowing even intensive workloads such as database and file-serving to make use of virtualized deployment. NUMA-specific enhancements that improve the performance of virtualized systems include: CPU pinning Virtual guests can be bound to run on a specific socket in order to optimize local cache use and remove the need for expensive inter-socket communications and remote memory access. transparent hugepages (THP) With THP enabled, the system automatically performs NUMA-aware memory allocation requests for large contiguous amounts of memory, reducing both lock contention and the number of translation lookaside buffer (TLB) memory management operations required and yielding a performance increase of up to 20% in virtual guests. kernel-based I/O implementation The virtual guest I/O subsystem is now implemented in the kernel, greatly reducing the expense of inter-node communication and memory access by avoiding a significant amount of context switching, and synchronization and communication overhead.
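The following commands give a rough way to inspect these optimizations on a KVM host. This is a sketch only; the guest name rhel6guest and the CPU range are placeholders for your own topology.

# Confirm transparent hugepages are active on the host ([always] or [madvise]):
cat /sys/kernel/mm/transparent_hugepage/enabled

# Pin vCPU 0 of a guest to physical CPUs 0-3 (one socket) so that memory
# access stays local to that NUMA node:
virsh vcpupin rhel6guest 0 0-3

# Review the resulting vCPU placement:
virsh vcpuinfo rhel6guest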
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/ch04s05s02
4.11. RHEA-2012:0825 - new package: ledmon
4.11. RHEA-2012:0825 - new package: ledmon A new ledmon package is now available for Red Hat Enterprise Linux 6. The ledmon and ledctl utilities are user space applications designed to control LEDs associated with each slot in an enclosure or a drive bay. There are two types of systems: 2-LED system (Activity LED, Status LED) and 3-LED system (Activity LED, Locate LED, Fail LED). Users must have root privileges to use this application. This enhancement update adds the ledmon package to Red Hat Enterprise Linux 6. (BZ# 750379 ) All users who require ledmon are advised to install this new package.
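As an illustration of typical usage on a 3-LED system (the device path is a placeholder, and root privileges are required, as noted above):

# Install the new package and start the monitoring daemon:
yum install ledmon
ledmon

# Blink the Locate LED for a drive, then turn it off again:
ledctl locate=/dev/sda
ledctl locate_off=/dev/sda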
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/ledmon
Chapter 1. Red Hat Enterprise Linux AI command line interface reference
Chapter 1. Red Hat Enterprise Linux AI command line interface reference This reference provides descriptions and examples for the Red Hat Enterprise Linux AI CLI ( ilab ) commands. 1.1. Red Hat Enterprise Linux AI CLI commands 1.1.1. ilab config Command Group for Interacting with the configuration of InstructLab Example usage 1.1.1.1. ilab config init Initializes environment for InstructLab Example usage 1.1.1.2. ilab config show Displays current state of the config file stored at ~/.config/instructlab/config.yaml Example usage 1.1.1.3. ilab config edit Allows you to edit the config stored at ~/.config/config.yaml Example usage 1.1.2. ilab data Command Group for Interacting with the data generated by InstructLab Example usage 1.1.2.1. ilab data generate Runs the synthetic data generation (SDG) process for InstructLab Example usage 1.1.2.2. ilab data list Displays every dataset in the datasets directory, ` ~/.local/instructlab/datasets` , on your machine Example usage 1.1.3. ilab model Command Group for Interacting with the models in InstructLab Example usage 1.1.3.1. ilab model chat Run a chat using the modified model Example usage 1.1.3.2. ilab model download Downloads the model(s) Example usage 1.1.3.3. ilab model evaluate Runs the evaluation process on the model Example usage 1.1.3.4. ilab model list Lists all the models installed on your system Example usage 1.1.3.5. ilab model train Runs the training process on the model Example usage 1.1.3.6. ilab model serve Serves the model on an endpoint Example usage 1.1.4. ilab system Command group for all system-related commands Example usage 1.1.4.1. ilab system info Displays the hardware specifications of your system Example usage 1.1.5. ilab taxonomy Command Group for Interacting with the taxonomy path of InstructLab Example usage 1.1.5.1. ilab taxonomy diff Lists taxonomy files that you changed and verifies that the taxonomy is valid Example usage
[ "Prints the usable commands in the config group ilab config", "Set up the InstructLab environment ilab config init", "Shows the `config.yaml` file on your system ilab config show", "Opens a vim shell where you can edit your config file ilab config edit", "Prints the usable commands in the data group ilab data", "Runs the SDG process on the default model, the default model is specified in the `~/.config/config.yaml` ilab data generate Runs the SDG process on a selected model ilab data generate --model <model-name> Runs the SDG process on the customized taxonomy path ilab data generate --taxonomy-path <path-to-taxonomy> Edits the `config.yaml` to use a specified number of GPUs in SDG ilab data generate --gpus <num-gpus>", "List every dataset in the datasets directory ilab data list", "Prints the usable commands in the model group ilab model", "Creates a virtual environment to chat with the model ilab model chat Creates a virtual environment to chat with a specified model ilab model chat --model <model-name>", "Downloads the default models ilab model download Downloads the models from a specific repository ilab model download --repository <name-of-repository>", "Runs the evaluation process on the MMLU benchmark ilab model evaluate --benchmark mmlu Runs the evaluation process on the MT_BENCH benchmark ilab model evaluate --benchmark mt_bench Runs the evaluation process on the MMLU_BRANCH benchmark ilab model evaluate --benchmark mmlu_branch Runs the evaluation process on the MT_BENCH_BRANCH benchmark ilab model evaluate --benchmark mt_bench_branch", "* List all the installed models ilab model list", "Runs the training process on the default model from the config.yaml ilab model train Runs the training process on a specified model ilab model train --model-name <name-of-model>", "Serves the default model to the server ilab model serve Serves the specified model to the server ilab model serve --model-path <path-to-model> Serves the default model using a specified number of GPUs ilab model serve --gpus <num-gpus>", "Prints the usable commands in the system group ilab system", "#Prints the hardware specifications of your machine ilab system info", "Prints the usable commands in the taxonomy group ilab taxonomy", "Prints the taxonomy files you changed and verifies that the taxonomy is valid ilab taxonomy diff Prints the taxonomy files in a specified path and verifies that the taxonomy is valid ilab taxonomy diff --taxonomy-path <path-to-taxonomy>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/cli_reference/cli_reference
14.5. Configuring the LDAP Database
14.5. Configuring the LDAP Database The Certificate System performs certificate- and key-management functions in response to the requests it receives. These functions include the following: Storing and retrieving certificate requests Storing and retrieving certificate records Storing CRLs Storing ACLs Storing privileged user and role information Storing and retrieving end users' encryption private key records To fulfill these functions, the Certificate System is incorporated with a Red Hat Directory Server, referred to as the internal database or local database . The Directory Server is referenced as part of the Certificate System configuration; when the Certificate System subsystem is configured, a new database is created within the Directory Server. This database is used as an embedded database exclusively by the Certificate System instance and can be managed using directory management tools that come with the Directory Server. The Certificate System instance database is listed with the other Directory Server databases in the serverRoot /slapd- DS_name /db/ directory. These databases are named by the value determined by the value of the pki_ds_database variable under the specified subsystem section within the /etc/pki/default.cfg file ( CS_instance_name-CA , CS_instance_name-KRA , CS_instance_name-OCSP , CS_instance_name-TKS , and CS_instance_name-TPS by default), which is the default format given during the instance configuration. For example, for a Certificate Manager named ca1 , the database name would be ca1-CA . Similarly, the database name is determined by the value of the pki_ds_base_dn variable under the specified subsystem section within the /etc/pki/default.cfg file ((o=CS_instance_name-CA, o=CS_instance_name-KRA, o=CS_instance_name-OCSP, o=CS_instance_name-TKS, or o=CS_instance_name-TPS by default), and is also set during the configuration. The subsystems use the database for storing different objects. A Certificate Manager stores all the data, certificate requests, certificates, CRLs, and related information, while a KRA only stores key records and related data. Warning The internal database schema are configured to store only Certificate System data. Do not make any changes to it or configure the Certificate System to use any other LDAP directory. Doing so can result in data loss. Additionally, do not use the internal LDAP database for any other purpose. 14.5.1. Changing the Internal Database Configuration To change the Directory Server instance that a subsystem instance uses as its internal database: Log into the subsystem administrative console. In the Configuration tab, select the Internal Database tab. Change the Directory Server instance by changing the hostname, port, and bind DN fields. The hostname is the fully qualified hostname of the machine on which the Directory Server is installed, such as certificates.example.com . The Certificate System uses this name to access the directory. By default, the hostname of the Directory Server instance used as the internal database is shown as localhost instead of the actual hostname. This is done to insulate the internal database from being visible outside the system since a server on localhost can only be accessed from the local machine. Thus, the default configuration minimizes the risk of someone connecting to this Directory Server instance from outside the local machine. The hostname can be changed to something other than localhost if the visibility of the internal database can be limited to a local subnet. 
For example, if the Certificate System and Directory Server are installed on separate machines for load balancing, specify the hostname of the machine in which the Directory Server is installed. The port number is the TCP/IP port used for non-SSL communications with the Directory Server. The DN should be the Directory Manager DN. The Certificate System subsystem uses this DN when it accesses the directory tree to communicate with the directory. Click Save . The configuration is modified. If the changes require restarting the server, a prompt appears with that message. In that case, restart the server. Note pkiconsole is being deprecated. 14.5.2. Using a Certificate Issued by Certificate System in Directory Server To use an encrypted connection to Directory Server when you installed Certificate System, it was necessary to either use a certificate issued by an external Certificate Authority (CA) or a self-signed certificate. However, after setting up the Certificate System CA, administrators often want to replace this certificate with one issued by Certificate System. To replace the TLS certificate used by Directory Server with a certificate issued by Certificate System: On the Directory Server host: Stop the Directory Server instance: Generate a Certificate Signing Request (CSR). For example, to generate a CSR which uses 2048 bit RSA encryption, and to store it in the ~/ds.csr file: Start the Directory Server instance to enable the CA to process the request: Submit the CSR to the Certificate System's CA. For example: On the Certificate System host: Import the CA agent certificate into a Network Security Services (NSS) database to sign the CMC full request: Create a new directory. For example: Initialize the database in the newly created directory: Display the serial number of the CA signing certificate: Use the serial number from the step to download the CA signing certificate into the ~/certs_db/CA.pem file: Import the CA signing certificate into the NSS database: Import the agent certificate: Create the Certificate Management over CMS (CMC) request: Create a configuration file, such as ~/sslserver-cmc-request.cfg , with the following content: Create the CMC request: Submit the CMC request: Create a configuration file, such as ~/sslserver-cmc-submit.cfg , with the following content: Submit the request: Optionally, verify the result: Display the serial number of the Directory Server certificate: Use the serial number from the step to download the certificate: Copy the certificate for Directory Server and the CA certificate to the Directory Server host. For example: Stop Certificate System: On the Directory Server host: Stop the Directory Server instance: Replace the certificates. For details, see the corresponding sections in the Red Hat Directory Server Administration Guide : Remove the old certificate and CA certificate. See Removing a Certificate . Install the CA certificate issued by Certificate System. See Installing a CA Certificate . Install the certificate for Directory Server issued by Certificate System. See Installing a Server Certificate . Start the Directory Server instance: Start Certificate System: Optionally, configure certificate-based authentication. For details, see Section 14.5.3, "Enabling SSL/TLS Client Authentication with the Internal Database" . 14.5.3. Enabling SSL/TLS Client Authentication with the Internal Database Client authentication allows one entity to authenticate to another entity by presenting a certificate. 
This method of authentication is used by Certificate System agents to log into agent services pages, for example. To use an SSL/TLS connection between a Certificate System instance and the LDAP directory instance that it uses as its internal database, client authentication must be enabled to allow the Certificate System instance to authenticate and bind to the LDAP directory. There are two parts to setting up client authentication. The first is configuring the LDAP directory, such as setting up SSL/TLS and setting ACIs to control the Certificate System instance access. The second is creating a user on the Certificate System instance which it will use to bind to the LDAP directory and setting up its certificate. To configure LDAPS for a PKI instance, see the pkispawn (8) man page (Example: Installing a PKI subsystem with a secure LDAP connection). 14.5.4. Restricting Access to the Internal Database The Red Hat Directory Server Console displays an entry or icon for the Directory Server instance that the Certificate System uses as its internal database. Unlike the Certificate System Console, in which access is restricted to users with Certificate System administrator privileges, the Directory Server Console can be accessed by any user. The user can open the Directory Server Console for the internal database and change to the data stored there, such as deleting users from the Certificate System administrators group or adding his own entry to the group. Access can be restricted to the internal database to only those users who know the Directory Manager DN and password. This password can be changed by modifying the single sign-on password cache. Log into the Directory Server Console. Select the Certificate System internal database entry, and click Open . Select the Configuration tab. In the navigation tree, expand Plug-ins , and select Pass-Through Authentication . In the right pane, deselect the Enable plugin checkbox. Click Save . The server prompts to restart the server. Click the Tasks tab, and click Restart the Directory Server . Close the Directory Server Console. When the server is restarted, open the Directory Server Console for the internal database instance. The Login to Directory dialog box appears; the Distinguished Name field displays the Directory Manager DN; enter the password. The Directory Server Console for the internal database opens only if the correct password is entered.
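After replacing the Directory Server certificate as described in Section 14.5.2, you might want a quick, informal check that the directory now presents the certificate issued by Certificate System. The commands below are a sketch only; the host name and CA file match the earlier examples, and 636 is assumed to be the LDAPS port of your instance.

# Inspect the TLS handshake and verify the chain against the CA certificate:
openssl s_client -connect ds.example.com:636 -CAfile ~/certs_db/CA.pem < /dev/null

# Perform an authenticated search over LDAPS with the Directory Manager DN:
ldapsearch -H ldaps://ds.example.com:636 -D "cn=Directory Manager" -W -b "" -s base "objectclass=*"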
[ "pkiconsole https://server.example.com: admin_port/subsystem_type", "systemctl stop dirsrv@ instance_name", "PKCS10Client -d /etc/dirsrv/slapd- instance_name / -p password -a rsa -l 2048 -o ~/ds.csr -n \"CN=USDHOSTNAME\" PKCS10Client: Debug: got token. PKCS10Client: Debug: thread token set. PKCS10Client: token Internal Key Storage Token logged in PKCS10Client: key pair generated. PKCS10Client: CertificationRequest created. PKCS10Client: b64encode completes. Keypair private key id: -3387b397ebe254b91c5d6c06dc36618d2ea8b7e6 -----BEGIN CERTIFICATE REQUEST----- -----END CERTIFICATE REQUEST----- PKCS10Client: done. Request written to file: ~/ds.csr", "systemctl start dirsrv@ instance_name", "pki -d /etc/dirsrv/slapd- instance_name / ca-cert-request-submit --profile caServerCert --csr-file ~/ds.csr ----------------------------- Submitted certificate request ----------------------------- Request ID: 13 Type: enrollment Request Status: pending Operation Result: success", "mkdir ~/certs_db/", "certutil -N -d ~/certs_db/", "pki -p 8080 ca-cert-find --name \"CA Signing Certificate\" --------------- 1 entries found --------------- Serial Number: 0x87bbe2d", "pki -p 8080 ca-cert-show 0x87bbe2d --output ~/certs_db/CA.pem", "pki -d ~/certs_db/ -c password client-cert-import \" CA Certificate \" --ca-cert ~/certs_db/CA.pem", "pk12util -d ~/certs_db/ -i ~/.dogtag/instance_name/ca_admin_cert.p12 Enter Password or Pin for \"NSS FIPS 140-2 Certificate DB\": password Enter password for PKCS12 file: password pk12util: PKCS12 IMPORT SUCCESSFUL", "NSS database directory where the CA agent certificate is stored. dbdir= ~/certs_db/ NSS database password. password= password Token name (default is internal). tokenname= internal Nickname for CA agent certificate. nickname= caadmin Request format: pkcs10 or crmf. format=pkcs10 Total number of PKCS10/CRMF requests. numRequests=1 Path to the PKCS10/CRMF request. The content must be in Base-64 encoded format. Multiple files are supported. They must be separated by space. input= ~/ds.csr Path for the CMC request. output= ~/sslserver-cmc-request.bin", "CMCRequest ~/sslserver-cmc-request.cfg The CMC enrollment request in base-64 encoded format: The CMC enrollment request in binary format is stored in ~/sslserver-cmc-request.bin", "PKI server host name. host= server.example.com PKI server port number. port= 8443 Use secure connection. secure=true Use client authentication. clientmode=true NSS database directory where the CA agent certificate is stored. dbdir= ~/certs_db/ NSS database password. password= password Token name (default: internal). tokenname= internal Nickname of CA agent certificate. nickname= caadmin CMC servlet path servlet=/ca/ee/ca/profileSubmitCMCFull?profileId=caCMCserverCert Path for the CMC request. input= ~/sslserver-cmc-request.bin Path for the CMC response. output= ~/sslserver-cmc-response.bin", "HttpClient sslserver-cmc-submit.cfg The response in binary format is stored in ~/sslserver-cmc-response.bin", "CMCResponse -d ~/certs_db/ -i ~/sslserver-cmc-response.bin Number of controls is 1 Control #0: CMCStatusInfoV2 OID: {1 3 6 1 5 5 7 7 25} BodyList: 1 Status: SUCCESS", "pki -p 8080 ca-cert-find --name \"DS Certificate\" --------------- 1 entries found --------------- Serial Number: 0xc3eeb0c", "pki -p 8080 ca-cert-show 0xc3eeb0c --output ~/ds.crt", "scp ~/ds.crt ~/certs_db/CA.pem ds.example.com : ~/", "pki-server stop instance_name", "systemctl stop dirsrv@ instance_name", "systemctl start dirsrv@ instance_name", "pki-server stop instance_name" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/the_internal_ldap_database
Part IV. Gathering Information About the Environment
Part IV. Gathering Information About the Environment
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/part-gathering_information_about_the_environment
Chapter 4. Red Hat JBoss Enterprise Application Platform application migration from Jakarta EE 8 to 10
Chapter 4. Red Hat JBoss Enterprise Application Platform application migration from Jakarta EE 8 to 10 JBoss EAP 8.0 provides support for Jakarta EE 10. Jakarta EE 10 brings a large change to Jakarta EE compared to the Jakarta EE 8 specifications supported by JBoss EAP 7. In this chapter, the compatibility-impacting differences in the Jakarta EE APIs that application developers must be aware of when preparing to migrate their applications from JBoss EAP 7 to JBoss EAP 8.0 are discussed. Note The focus of this chapter is on the differences between Jakarta EE 8 and Jakarta EE 10 that an application developer migrating their application to JBoss EAP 8.0 might need to deal with, and not on how to do the migration. For more information on JBoss EAP 7 to JBoss EAP 8.0 application migration and the tools provided by Red Hat to assist with this, see Simplify your JBoss EAP 8.0 migration with effective tools and Understanding application migration changes . 4.1. The javax to jakarta Package Namespace change By far the largest compatibility-impacting difference between Jakarta EE 8 and EE 10 is the renaming of the EE API Java packages from javax.* to jakarta.* . Following the move of Java EE to the Eclipse Foundation and the establishment of Jakarta EE, Eclipse and Oracle agreed that the Jakarta EE community cannot evolve the javax. package namespace. Therefore, in order to continue to evolve the EE APIs, beginning with Jakarta EE 9, the packages used for all EE APIs have changed from javax.* to jakarta.* . This change does not affect javax packages that are part of Java SE. Adapting to this namespace change is the biggest change involved in migrating an application from JBoss EAP 7 to JBoss EAP 8. Applications migrating to Jakarta EE 10 need to: Update any import statements or other source code uses of EE API classes from the javax package to jakarta Update the names of any EE-specified system properties or other configuration properties whose names that begin with javax. to instead begin with jakarta. Change the name of the resource that identifies the implementation class from META-INF/services/javax.[rest_of_name] to META-INF/services/jakarta.[rest_of_name] for any application-provided implementations of EE interfaces or abstract classes that are bootstrapped using the java.util.ServiceLoader mechanism. Note The Red Hat Migration Toolkit can assist in updating the namespaces in the application source code. For more information, see How to use Red Hat Migration Toolkit for Auto-Migration of an Application to the Jakarta EE 10 Namespace . For cases where source code migration is not an option, the open source Eclipse Transformer project provides bytecode transformation tooling to transform existing Java archives from the javax namespace to jakarta. 4.2. Other Changes Besides the package namespace change, applications written for earlier EE versions may need to adapt to changes made in a number of specifications included in Jakarta EE 10. The following sections describe these changes, which are mostly removals of long-deprecated API elements. In the following sections, for any instances of API elements that have been removed that use the javax namespace, the equivalent removal has been done in the jakarta namespace used in Jakarta EE 9. Therefore, if you have updated your application to replace the javax namespace with jakarta , assume that the items that mention javax are applicable for your application. 4.2.1. 
Jakarta Contexts and Dependency Injection Bean Discovery As per the CDI 4.0 spec change notes , the default behavior for discovering Contexts and Dependency Injection or CDI beans in a deployment with an empty beans.xml file has changed from all to annotated . This means that for such a deployment only deployment classes with a bean defining annotation is discovered by CDI. If all application classes using beans have such an annotation, this CDI change will have no impact. Otherwise, an application deployment might fail when CDI cannot find a type that provides a particular bean. If your application is impacted by this change, you have several options: Leave the beans.xml file empty but add a bean defining annotation to all classes that need it. Leave the classes unchanged but change the beans.xml file from being empty to one with the following content: <beans bean-discovery-mode="all"></beans> Leave the application unchanged, but change the server's weld subsystem configuration to restore handling of empty beans.xml files back to the JBoss EAP 7 behavior. This setting affects all deployments on the server. For example, with the CLI: /subsystem=weld:write-attribute(name=legacy-empty-beans-xml-treatment,value=true) 4.2.2. CDI API Changes Jakarta Contexts and Dependency Injection 4.0 removed the following deprecated API elements: The javax.enterprise.inject.spi.Bean.isNullable() method has been removed. This method has always returned false for many years now, so applications that call it can replace the call with false or remove any branching logic and just retain the contents of the false branch. The javax.enterprise.inject.spi.BeanManager.createInjectionTarget(AnnotatedType) method has been removed. Replace this method call with with BeanManager.getInjectionTargetFactory(AnnotatedType) and use the returned factory to create injection targets. See Obtaining an InjectionTarget for a class in the Jakarta Contexts and Dependency injection specification for more information. The javax.enterprise.inject.spi.BeanManager.fireEvent(Object, Annotation) method has been removed. Use BeanManager.getEvent() as an entry point to a similar API. See Firing an event in the Jakarta Contexts and Dependency injection specification for more information. The javax.enterprise.inject.spi.BeforeBeanDiscovery.addAnnotatedType(AnnotatedType) method has been removed. If your application is calling this method, you can replace it with a call to BeforeBeanDiscovery.addAnnotatedType(AnnotatedType, (String) null) . 4.2.3. Jakarta Enterprise Beans Java SE 14 has removed the java.security.Identity class, so it's usage has been removed from the Jakarta Enterprise Beans 4.0 API. The deprecated javax.ejb.EJBContext.getCallerIdentity() method has been removed. You can use EJBContext.getCallerPrincipal() instead, which returns java.security.Principal . The deprecated javax.ejb.EJBContext.isCallerInRole(Identity role) method has been removed. You can use EJBContext.isCallerInRole(String roleName) instead. The Jakarta XML RPC specification has been removed from the Jakarta EE 10 Full Platform, so the javax.ejb.SessionContext.getMessageContext() method that returned javax.xml.rpc.handler.MessageContext has been removed. The Jakarta XML RPC specification was optional in Jakarta EE 8, and Red Hat JBoss EAP 7 does not support it. Any usage of this specification would have thrown an IllegalStateException , so this EJB API change is not expected to affect any existing applications running on JBoss EAP 7. 
The deprecated javax.ejb.EJBContext.getEnvironment() method has been removed. Use the JNDI naming context java:comp/env to access the enterprise bean's environment. 4.2.4. Jakarta Expression Language The incorrectly spelled javax.el.MethodExpression.isParmetersProvided() method has been removed. You can use MethodExpression.isParametersProvided() instead. 4.2.5. Jakarta JSON Binding By default, types annotated with the jakarta.json.bind.annotation.JsonbCreator annotation do not require all parameters to be available in the JSON content. Default values will be used if the JSON being parsed is missing one of the parameters. The EE 8 behavior that requires all the parameters to be present in the JSON can be turned on by calling jakarta.json.bind.JsonbConfig().withCreatorParametersRequired(true) . 4.2.6. Jakarta Faces The following deprecated functionality has been removed in Jakarta Faces 4.0. 4.2.6.1. Jakarta Faces and Java Server Pages Jakarta Server Pages (JSP) support is deprecated in Jakarta Faces 2.0 and later versions. JSP support is removed in Jakarta Faces 4.0. Facelets replaces JSP as the preferred View Definition Language (VDL). Applications that use JSP for Faces views, which you can identify by the FacesServlet being mapped to the *.jsp suffix in web.xml , must be modified to use Facelets. 4.2.6.2. Faces Managed-Beans The deprecated Jakarta Faces-specific managed-bean concept has been removed in Faces 4.0 in favor of Jakarta Contexts and Dependency Injection (CDI) beans. Applications using Faces managed-beans (i.e. classes annotated with javax.faces.bean.ManagedBean or referenced in a managed-bean element in faces-config.xml ) might need to make the following changes: Classes annotated with javax.faces.bean.ManagedBean or referenced in a managed-bean element in faces-config.xml should instead be annotated with jakarta.inject.Named , and any managed-bean element in faces-config.xml should be removed. Members annotated with the javax.faces.bean.ManagedProperty annotation should use jakarta.faces.annotation.ManagedProperty instead, along with the jakarta.inject.Inject annotation. To get a startup semantic similar to the old javax.faces.bean.ManagedBean(name="foo", eager=true) , add a public void xxx(@Observes jakarta.enterprise.event.Startup event) method or a public void xxx(@Observes @Initialized(ApplicationScoped.class) Object context) method. The jakarta.enterprise.event.Startup option is new in CDI 4.0. Use of the javax.faces.bean.ApplicationScoped annotation should be replaced with jakarta.enterprise.context.ApplicationScoped . Use of the javax.faces.bean.CustomScoped annotation should be replaced with CDI custom scopes and jakarta.enterprise.context.spi.Context . See Defining new scope types and The Context Interface in the CDI 4.0 specification for more details. Use of the javax.faces.bean.NoneScoped annotation should be replaced with jakarta.enterprise.context.Dependent , which is a CDI built-in scope with approximately similar semantics. Use of the javax.faces.bean.RequestScoped annotation should be replaced with jakarta.enterprise.context.RequestScoped . Use of the javax.faces.bean.SessionScoped annotation should be replaced with jakarta.enterprise.context.SessionScoped . 4.2.6.3. Other Faces API Changes The javax.faces.bean.ViewScoped annotation has been removed. You can use jakarta.faces.view.ViewScoped instead. The javax.faces.view.facelets.ResourceResolver and javax.faces.view.facelets.FaceletsResourceResolver annotations have been removed.
For any ResourceResolvers in your application, implement the jakarta.faces.application.ResourceHandler interface and register the fully qualified class name of the implementation in the application/resource-handler element in faces-config.xml . 4.2.7. Jakarta Servlet Jakarta Servlet 6.0 removes a number API classes and methods that were deprecated in Servlet 5.0 and earlier, mostly in the Servlet 2.x releases. The javax.servlet.SingleThreadModel marker interface has been removed and servlets that implement this interface must remove the interface declaration and ensure that the servlet code properly guards state and other resource access against concurrent access. For example, by avoiding the usage of an instance variable or synchronizing the block of code accessing resources. However, it is recommended that developers do not synchronize the service method (or methods like doGet and doPost that it dispatches to) because of the detrimental effect of such synchronization on performance. The javax.servlet.http.HttpSessionContext interface has been removed, along with the javax.servlet.http.HttpSession.getSessionContext() method. There have been no use cases for this interface since Servlet 2.1 as its implementations were required by specifications not to provide any usable data. The javax.servlet.http.HttpUtils utility class has been removed. Applications should use the ServletRequest and HttpServletRequest interfaces instead of the following methods: parseQueryString(String s) and parsePostData(int len, ServletInputStream in) - Use ServletRequest.getParameterMap() . If an application needs to differentiate between query string parameters and request body parameters, the application must implement the code to do that by parsing the query string itself. getRequestURL(HttpServletRequest req) - Use HttpServletRequest.getRequestURL() . Also, the following miscellaneous methods and constructors have been removed: Class/Interface Removed Use Instead javax.servlet.ServletContext getServlet(String name) no replacement getServlets() no replacement getServletNames() no replacement log(Exception exception, String msg) log(String message, Throwable throwable) javax.servlet.ServletRequest getRealPath(String path) ServletContext.getRealPath(String path) javax.servlet.ServletRequestWrapper getRealPath(String path) ServletContext.getRealPath(String path) javax.servlet.UnavailableException getServlet() no replacement UnavailableException(Servlet servlet, String msg) UnavailableException(String) UnavailableException(int seconds, Servlet servlet, String msg) UnavailableException(String, int) javax.servlet.http.HttpServletRequest isRequestedSessionIdFromUrl() isRequestedSessionIdFromURL() javax.servlet.http.HttpServletRequestWrapper isRequestedSessionIdFromUrl() isRequestedSessionIdFromURL() javax.servlet.http.HttpServletResponse encodeUrl(String url) encodeURL(String url) encodeRedirectUrl(String url) encodeRedirectURL(String url) setStatus(int sc, String sm) sendError(int, String) javax.servlet.http.HttpServletResponseWrapper encodeUrl(String url) encodeURL(String url) encodeRedirectUrl(String url) encodeRedirectURL(String url) setStatus(int sc, String sm) sendError(int, String) javax.servlet.http.HttpSession getValue(String name) getAttribute(String name) getValueNames() getAttributeNames() putValue(String name, Object value) setAttribute(String name, Object value) removeValue(String name) removeAttribute(String name) 4.2.8. 
Jakarta Soap with Attachments Support for provider lookup through a jaxm.properties file has been removed. The deprecated javax.xml.soap.SOAPElementFactory class has been removed. Use jakarta.xml.soap.SOAPFactory for creating SOAPElements. SOAPElementFactory method SOAPFactory equivalent newInstance() newInstance() create(Name) createElement(Name) create(String) createElement(String) create(String, String, String) createElement(String, String, String) 4.2.9. Jakarta XML Binding The XML namespace that should be used in XML binding files has changed. The http://java.sun.com/xml/ns/jaxb namespace should be replaced with https://jakarta.ee/xml/ns/jaxb . The deprecated javax.xml.bind.Validator interface has been removed, as has the associated javax.xml.bind.JAXBContext.createValidator() method. To validate marshalling and unmarshalling operations, provide a javax.xml.validation.Schema to jakarta.xml.bind.Marshaller.setSchema(Schema) . Support for compatibility with JAXB 1.0 has been removed. Some of the deprecated steps in the JAXBContext implementation lookup algorithm have been removed. Searches for implementation class names through jaxb.properties files, javax.xml.bind.context.factory or jakarta.xml.bind.JAXBContext properties and /META-INF/services/javax.xml.bind.JAXBContext resource files have been dropped. For more information about the current implementation discovery algorithm, see the Jakarta XML Binding 4.0 specification . The generic requirements for a number of methods in the javax.xml.bind.Marshaller interface have changed as follows: Jakarta XML Binding 2.3 / 3.0 Jakarta XML Binding 4.0 <A extends XmlAdapter> void setAdapter(A adapter) <A extends XmlAdapter<?, ?>> void setAdapter(A adapter) <A extends XmlAdapter> void setAdapter(Class<A> type, A adapter) <A extends XmlAdapter<?, ?>> void setAdapter(Class<A> type, A adapter) <A extends XmlAdapter> A getAdapter(Class<A> type) <A extends XmlAdapter<?, ?>> A getAdapter(Class<A> type) Apart from the changes in the Jakarta XML Binding API, there have been significant package name changes in the implementation library in JBoss EAP 8.0, which might affect some applications that access the implementation library directly: Any use of classes in the com.sun.xml.bind package should be replaced by classes in the org.glassfish.jaxb.runtime package. Classes in sub-packages of com.sun.xml.bind should be replaced with classes in corresponding org.glassfish.jaxb.runtime sub-packages. For jakarta.xml.bind.Marshaller property settings, change the property constant name from com.sun.xml.bind.* to org.glassfish.jaxb.* . For example, marshaller.setProperty("com.sun.xml.bind.namespacePrefixMapper", mapper) becomes marshaller.setProperty("org.glassfish.jaxb.namespacePrefixMapper", mapper) .
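Before reaching for the Migration Toolkit or the Eclipse Transformer mentioned at the start of this chapter, it can help to get a quick inventory of where an application still uses the javax namespace. The commands below are only a sketch; the source paths are placeholders, and a blind search-and-replace is not safe because javax packages that belong to Java SE must not be renamed.

# Count occurrences of each top-level javax.* package in the sources:
grep -rhoE 'javax\.[a-z]+' src/main/java | sort | uniq -c | sort -rn

# List the files that still import a specific EE package, for example Faces:
grep -rl 'import javax\.faces' src/main/java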
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/migration_guide/application-migration-from-jakarta-ee-8-to-ee-10_default
3.3. Performance Co-Pilot (PCP)
3.3. Performance Co-Pilot (PCP) Red Hat Enterprise Linux 6 introduces support for Performance Co-Pilot ( PCP ), a suite of tools, services, and libraries for acquiring, storing, and analyzing system-level performance measurements. Its light-weight distributed architecture makes it particularly well-suited for centralized analysis of complex systems. Performance metrics can be added using the Python, Perl, C++, and C interfaces. Analysis tools can use the client APIs (Python, C++, C) directly, and rich web applications can explore all available performance data using a JSON interface. The Performance Co-Pilot Collection Daemon ( pmcd ) is responsible for collecting performance data on the host system, and various client tools, such as pminfo or pmstat , can be used to retrieve, display, archive, and process this data on the same host or over the network. The pcp package provides the command-line tools and underlying functionality. The graphical tool also requires the pcp-gui package. Resources For information on PCP, see the Index of Performance Co-Pilot (PCP) articles, solutions, tutorials and white papers on the Red Hat Customer Portal. The manual page named PCPIntro serves as an introduction to Performance Co-Pilot. It provides a list of available tools as well as a description of available configuration options and a list of related manual pages. By default, comprehensive documentation is installed in the /usr/share/doc/pcp-doc/ directory, notably the Performance Co-Pilot User's and Administrator's Guide and Performance Co-Pilot Programmer's Guide . If you need to determine what PCP tool has the functionality of an older tool you are already familiar with, see the Side-by-side comparison of PCP tools with legacy tools Red Hat Knowledgebase article. See the official PCP documentation for an in-depth description of the Performance Co-Pilot and its usage. If you want to start using PCP on Red Hat Enterprise Linux quickly, see the PCP Quick Reference Guide . The official PCP website also contains a list of frequently asked questions . Overview of System Services and Tools Provided by PCP Performance Co-Pilot (PCP) provides a large number of command-line tools, graphical tools, and libraries. For more information on these tools, see their respective manual pages. Table 3.1. System Services Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 6 Name Description pmcd The Performance Co-Pilot Collection Daemon (PMCD). pmie The Performance Metrics Inference Engine. pmlogger The performance metrics logger. pmmgr Manages a collection of PCP daemons for a set of discovered local and remote hosts running the Performance Co-Pilot Collection Daemon (PMCD) according to zero or more configuration directories. pmproxy The Performance Co-Pilot Collection Daemon (PMCD) proxy server. pmwebd Binds a subset of the Performance Co-Pilot client API to RESTful web applications using the HTTP protocol. Table 3.2. Tools Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 6 Name Description pcp Displays the current status of a Performance Co-Pilot installation. pmatop Shows the system-level occupation of the most critical hardware resources from the performance point of view: CPU, memory, disk, and network. pmchart Plots performance metrics values available through the facilities of the Performance Co-Pilot. pmclient Displays high-level system performance metrics by using the Performance Metrics Application Programming Interface (PMAPI). 
pmcollectl Collects and displays system-level data, either from a live system or from a Performance Co-Pilot archive file. pmdbg Displays available Performance Co-Pilot debug control flags and their values. pmdiff Compares the average values for every metric in either one or two archives, in a given time window, for changes that are likely to be of interest when searching for performance regressions. pmdumplog Displays control, metadata, index, and state information from a Performance Co-Pilot archive file. pmdumptext Outputs the values of performance metrics collected live or from a Performance Co-Pilot archive. pmerr Displays available Performance Co-Pilot error codes and their corresponding error messages. pmfind Finds PCP services on the network. pmie An inference engine that periodically evaluates a set of arithmetic, logical, and rule expressions. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file. pmieconf Displays or sets configurable pmie variables. pminfo Displays information about performance metrics. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file. pmiostat Reports I/O statistics for SCSI devices (by default) or device-mapper devices (with the -x dm option). pmlc Interactively configures active pmlogger instances. pmlogcheck Identifies invalid data in a Performance Co-Pilot archive file. pmlogconf Creates and modifies a pmlogger configuration file. pmloglabel Verifies, modifies, or repairs the label of a Performance Co-Pilot archive file. pmlogsummary Calculates statistical information about performance metrics stored in a Performance Co-Pilot archive file. pmprobe Determines the availability of performance metrics. pmrep Reports on selected, easily customizable, performance metrics values. pmsocks Allows access to a Performance Co-Pilot hosts through a firewall. pmstat Periodically displays a brief summary of system performance. pmstore Modifies the values of performance metrics. pmtrace Provides a command line interface to the trace Performance Metrics Domain Agent (PMDA). pmval Displays the current value of a performance metric.
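As a minimal, informal quick start on Red Hat Enterprise Linux 6 (the metric names are common examples and may differ on your system):

# Install the tools and start the collection daemon:
yum install pcp
service pmcd start
chkconfig pmcd on

# Print a brief system performance summary every two seconds:
pmstat -t 2

# Describe a metric and sample its value five times:
pminfo -dt kernel.all.load
pmval -s 5 kernel.all.load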
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-analyzeperf-pcp
Chapter 18. Knowledge worker tasks in Business Central
Chapter 18. Knowledge worker tasks in Business Central A task is a part of the business process flow that a given user can claim and perform. You can handle tasks in Menu Track Task Inbox in Business Central. It displays the task list for the logged-in user. A task can be assigned to a particular user, multiple users, or to a group of users. If a task is assigned to multiple users or a group of users, it is visible in the task lists of all the users and any user can claim the task. When a task is claimed by a user, it is removed from the task list of other users. 18.1. Starting a task You can start user tasks in Menu Manage Tasks and in Menu Track Task Inbox in Business Central. Note Ensure that you are logged in and have appropriate permissions for starting and stopping tasks. Procedure In Business Central, go to Menu Track Task Inbox . On the Task Inbox page, click the task to open it. On the Work tab of the task page, click Start . Once you start a task, its status changes to InProgress . You can view the status of tasks on the Task Inbox as well as on the Manage Tasks page. Note Only users with the process-admin role can view the task list on the Manage Tasks page. Users with the admin role can access the Manage Tasks page, however they see only an empty task list. 18.2. Stopping a task You can stop user tasks from the Tasks and Task Inbox page. Procedure In Business Central, go to Menu Track Task Inbox . On the Task Inbox page, click the task to open it. On the Work tab of the task page, click Complete . 18.3. Delegating a task After tasks are created in Business Central, you can delegate them to others. Note A user assigned with any role can delegate, claim, or release tasks visible to the user. On the Task Inbox page, the Actual Owner column displays the name of the current owner of the task. Procedure In Business Central, go to Menu Track Task Inbox . On the Task Inbox page, click the task to open it. On the task page, click the Assignments tab. In the User field, enter the name of the user or group you want to delegate the task to. Click Delegate . Once a task is delegated, the owner of the task changes. 18.4. Claiming a task After tasks are created in Business Central, you can claim the released tasks. A user can claim a task from the Task Inbox page only if the task is assigned to a group the user belongs to. Procedure In Business Central, go to Menu Track Task Inbox . On the Task Inbox page, click the task to open it. On the Work tab of the task page, click Claim . To claim the released task from the Task Inbox page, do any of the following tasks: Click Claim from the three dots in the Actions column. Click Claim and Work from the three dots in the Actions column to open, view, and modify the details of a task. The user who claims a task becomes the owner of the task. 18.5. Releasing a task After tasks are created in Business Central, you can release your tasks for others to claim. Procedure In Business Central, go to Menu Track Task Inbox . On the Task Inbox page, click the task to open it. On the task page, click Release . A released task has no owner. 18.6. Bulk actions on tasks In the Tasks and Task Inbox pages in Business Central, you can perform bulk actions over multiple tasks in a single operation. Note If a specified bulk action is not permitted based on the task status, a notification is displayed and the operation is not executed on that particular task. 18.6.1. 
Claiming tasks in bulk After you create tasks in Business Central, you can claim the available tasks in bulk. Procedure In Business Central, complete one of the following steps: To view the Task Inbox page, select Menu Track Task Inbox . To view the Tasks page, select Menu Manage Tasks . To claim the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table. From the Bulk Actions drop-down list, select Bulk Claim . To confirm, click Claim on the Claim selected tasks window. For each task selected, a notification is displayed showing the result. 18.6.2. Releasing tasks in bulk You can release your owned tasks in bulk for others to claim. Procedure In Business Central, complete one of the following steps: To view the Task Inbox page, select Menu Track Task Inbox . To view the Tasks page, select Menu Manage Tasks . To release the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table. From the Bulk Actions drop-down list, select Bulk Release . To confirm, click Release on the Release selected tasks window. For each task selected, a notification is displayed showing the result. 18.6.3. Resuming tasks in bulk If there are suspended tasks in Business Central, you can resume them in bulk. Procedure In Business Central, complete one of the following steps: To view the Task Inbox page, select Menu Track Task Inbox . To view the Tasks page, select Menu Manage Tasks . To resume the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table. From the Bulk Actions drop-down list, select Bulk Resume . To confirm, click Resume on the Resume selected tasks window. For each task selected, a notification is displayed showing the result. 18.6.4. Suspending tasks in bulk After you create tasks in Business Central, you can suspend the tasks in bulk. Procedure In Business Central, complete one of the following steps: To view the Task Inbox page, select Menu Track Task Inbox . To view the Tasks page, select Menu Manage Tasks . To suspend the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table. From the Bulk Actions drop-down list, select Bulk Suspend . To confirm, click Suspend on the Suspend selected tasks window. For each task selected, a notification is displayed showing the result. 18.6.5. Reassigning tasks in bulk After you create tasks in Business Central, you can reassign your tasks in bulk and delegate them to others. Procedure In Business Central, complete one of the following steps: To view the Task Inbox page, select Menu Track Task Inbox . To view the Tasks page, select Menu Manage Tasks . To reassign the tasks in bulk, on the Task Inbox page or the Manage Tasks page, select two or more tasks from the Task table. From the Bulk Actions drop-down list, select Bulk Reassign . In the Tasks reassignment window, enter the user ID of the user to whom you want to reassign the tasks. Click Delegate . For each task selected, a notification is displayed showing the result.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/interacting-with-processes-knowledge-worker-tasks-con
Chapter 3. Using the OpenShift Container Platform dashboard to get cluster information
Chapter 3. Using the OpenShift Container Platform dashboard to get cluster information The OpenShift Container Platform web console captures high-level information about the cluster. 3.1. About the OpenShift Container Platform dashboards page Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by navigating to Home Overview from the OpenShift Container Platform web console. The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards. The OpenShift Container Platform dashboard consists of the following cards: Details provides a brief overview of informational cluster details. Status include ok , error , warning , in progress , and unknown . Resources can add custom status names. Cluster ID Provider Version Cluster Inventory details number of resources and associated statuses. It is helpful when intervention is required to resolve problems, including information about: Number of nodes Number of pods Persistent storage volume claims Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment) Status helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage). Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption, including information about: CPU time Memory allocation Storage consumed Network resources consumed Pod count Activity lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. 3.2. Recognizing resource and project limits and quotas You can view a graphical representation of available resources in the Topology view of the web console Developer perspective. If a resource has a message about resource limitations or quotas being reached, a yellow border appears around the resource name. Click the resource to open a side panel to see the message. If the Topology view has been zoomed out, a yellow dot indicates that a message is available. If you are using List View from the View Shortcuts menu, resources appear as a list. The Alerts column indicates if a message is available.
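The dashboard cards summarize data that you can also retrieve from the command line. The following oc commands are a minimal sketch of CLI equivalents for the inventory, utilization, and activity information described above; they assume you are logged in to the cluster with sufficient read permissions.
# Cluster identity and version (Details card)
oc get clusterversion
# Inventory: nodes, pods, and persistent volume claims (Cluster Inventory card)
oc get nodes
oc get pods --all-namespaces
oc get pvc --all-namespaces
# Utilization: CPU and memory consumption per node and per pod (Cluster Utilization card)
oc adm top nodes
oc adm top pods --all-namespaces
# Recent activity (Activity card)
oc get events --all-namespaces --sort-by=.lastTimestamp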
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/web_console/using-dashboard-to-get-cluster-info
Chapter 6. Premigration checklists
Chapter 6. Premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the following checklists. 6.1. Cluster health checklist ❏ The clusters meet the minimum hardware requirements for the specific platform and installation method, for example, on bare metal . ❏ All MTC prerequisites are met. ❏ All nodes have an active OpenShift Container Platform subscription. ❏ You have verified node health . ❏ The identity provider is working. ❏ The migration network has a minimum throughput of 10 Gbps. ❏ The clusters have sufficient resources for migration. Note Clusters require additional memory, CPUs, and storage in order to run a migration on top of normal workloads. Actual resource requirements depend on the number of Kubernetes resources being migrated in a single migration plan. You must test migrations in a non-production environment in order to estimate the resource requirements. ❏ The etcd disk performance of the clusters has been checked with fio . 6.2. Source cluster checklist ❏ You have checked for persistent volumes (PVs) with abnormal configurations stuck in a Terminating state by running the following command: USD oc get pv ❏ You have checked for pods whose status is other than Running or Completed by running the following command: USD oc get pods --all-namespaces | egrep -v 'Running | Completed' ❏ You have checked for pods with a high restart count by running the following command: USD oc get pods --all-namespaces --field-selector=status.phase=Running \ -o json | jq '.items[]|select(any( .status.containerStatuses[]; \ .restartCount > 3))|.metadata.name' Even if the pods are in a Running state, a high restart count might indicate underlying problems. ❏ The cluster certificates are valid for the duration of the migration process. ❏ You have checked for pending certificate-signing requests by running the following command: USD oc get csr -A | grep pending -i ❏ The registry uses a recommended storage type . ❏ You can read and write images to the registry. ❏ The etcd cluster is healthy. ❏ The average API server response time on the source cluster is less than 50 ms. 6.3. Target cluster checklist ❏ The cluster has the correct network configuration and permissions to access external services, for example, databases, source code repositories, container image registries, and CI/CD tools. ❏ External applications and services that use services provided by the cluster have the correct network configuration and permissions to access the cluster. ❏ Internal container image dependencies are met. ❏ The target cluster and the replication repository have sufficient storage space.
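The checklist item about etcd disk performance does not show the fio invocation. The following sketch, based on the commonly used etcd disk benchmark, performs small synchronous writes so you can inspect the fdatasync latency percentiles; the directory is an assumption and must reside on the disk that backs etcd, and the test data should be removed afterwards.
# Run on the disk that backs etcd; a 99th percentile fdatasync latency of roughly 10 ms or less is commonly cited as acceptable
mkdir -p /var/lib/etcd/fio-test
fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd/fio-test --size=22m --bs=2300 --name=etcd-perf
rm -rf /var/lib/etcd/fio-test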
[ "oc get pv", "oc get pods --all-namespaces | egrep -v 'Running | Completed'", "oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'", "oc get csr -A | grep pending -i" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migration_toolkit_for_containers/premigration-checklists-mtc
Chapter 3. Notable Bug Fixes
Chapter 3. Notable Bug Fixes This chapter describes bugs fixed in this release of Red Hat Gluster Storage that have significant impact on users. Note Bugzilla IDs that are not hyperlinked are private bugs that have potentially sensitive data attached. Security Fixes CVE-2019-10197 (Moderate) A combination of parameters and permissions could allow user to escape from the share path definition. General Fixes BZ#1578703 Previously, running gluster volume status <volname> inode output the entire inode table, which could time out and create performance issues. The output of this command is now more streamlined, and the original information should now be obtained by performing a statedump. BZ#1734423 , BZ#1736830 , BZ#1737674 Previously, dynamically allocated memory was not freed correctly, which led to an increase in memory consumption and out-of-memory management on gluster clients. Memory is now freed correctly so that memory overruns do not occur. BZ#1676468 Previously, glusterfs enabled kernel auto-invalidation, which invalidates page cache when ctime changes. This meant that whenever writes occurred before, during, and after a ctime change, the page cache was purged, and the performance of subsequent writes did not benefit from caching. Two new options are now available to improve performance. The mount option auto-invalidation[=on|off] is now enabled by default, and specifies whether the kernel can automatically invalidate attribute, dentry, and page cache. To retain page cache after writes, set this to 'off', but only if files cannot be accessed by two different mount points concurrently. The volume option performance.global-cache-invalidation=[on|off] overrides the value of performance.cache-invalidation . This option is disabled by default, but when enabled purges all read caches related to gluster when a stat change is detected. Turn this option on only when a file can be accessed from different mount points and caches across these mount points are required to be coherent. If both options are turned off, data written is retained in page cache and performance of overlapping reads in the same region improves. BZ#1726991 Brick status was displayed as started when the brick was in a starting or stopping state because the get-status operation only tracked the started and stopped states. The get-status operation now reports state more accurately. BZ#1720192 When a gluster volume has a bind-address specified, the name of the rebalance socket file becomes greater than the allowed character length, which prevents rebalance from starting. A hash is now generated based on the volume name and UUID, avoiding this issue. BZ#1670415 , BZ#1686255 A small memory leak that occurred when viewing the status of all volumes has been fixed. BZ#1652461 If a user configured more than 1500 volumes in a 3 node cluster, and a node or glusterd service became unavailable, then during reconnection there was too much volume information to gather before the handshake process timed out. This issue is resolved by adding several optimizations to the volume information gathering process. BZ#1058032 Previously, while migrating a virtual machine, libvirt changed ownership of the machine image if it detected that the image was on a shared file system. This prevented virtual machines from accessing the image. This issue can no longer be reproduced. 
BZ#1685246 Access Control List settings were not being removed from Red Hat Gluster Storage volumes because the removexattr system call was not being passed on to the brick process. This has been corrected and attributes are now removed as expected. Fixes for Dispersed Volumes BZ#1732774 If a file on a bad brick was being healed while a write request for that file was being performed, the read that occurs during a write operation could still read the file from the bad brick. This could lead to corruption of data on good bricks. All reads are now done from good bricks only, avoiding this issue. BZ#1706549 When bricks are down, files can still be modified using the O_TRUNC flag. When bricks function again, any operation that modified the file using file descriptor starts open-fd heal. Previously, when open-fd heal was performed on a file that was opened using O_TRUNC , a truncate operation was triggered on the file. Because the truncate operation usually happened as part of an operation that already took a lock, it did not take an explicit lock, which in this case led to a NULL lock structure, and eventually led to a crash when the NULL lock structure was de-referenced. The O_TRUNC flag is now ignored during an open-fd heal, and a truncate operation occurs during the data heal of a file, avoiding this issue. BZ#1745107 Previously, when an update to a file's size or version failed, the file descriptor was not marked as bad. This meant that bricks were assumed to be good when this was not necessarily true and that the file could show incorrect data. This update ensures that the file descriptor is marked as bad with the change file sync or flush fails after an update failure. Fixes for Distributed Volumes BZ#1672869 Previously, when parallel-readdir was enabled, stale linkto files could not be deleted because they were incorrectly interpreted as data files. Stale linkto files are now correctly identified. Fixes for Events BZ#1732443 Previously, the network family was not set correctly during events socket initialization. This resulted in an invalid argument error and meant that events were not sent to consumers. Network family is now set correctly and events work as expected. Fixes for automation with gdeploy BZ#1759810 The configuration options group=samba and user.cifs=enable are now set on the volume during Samba setup via gdeploy, ensuring setup is successful. BZ#1712904 Previously, when samba was configured using gdeploy, the samba user was not created on all nodes in a cluster. This caused problems during failover of CTDB, as the required user did not exist. gdeploy now creates the samba user on all nodes, avoiding this issue. Fixes for Geo-replication BZ#1708116 During geo-replication, when a sync was attempted for a large number of files that had been unlinked and no longer existed on master, the tarssh process hung because of a deadlock. When the stderr buffer of the tar process filled before tar completed, it hung. Workers expected tar to complete before reading stderr, but tar could not complete until the buffer was freed by being read. Workers now begin reading stderr output as soon as the tar process is created, avoiding the issue. BZ#1708121 Geo-replication now synchronizes correctly instead of creating additional files when a large number of different files have been created and renamed to the same destination path. 
BZ#1712591 In non-root geo-replication sessions, gluster binary paths were not added to PATH variable, which meant that gluster commands were not available to the session. Existing gluster-command-dir and gluster-command-slave-dir options can be used to ensure that sessions have access to gluster commands. BZ#1670429 Geo-replication now succeeds when a symbolic link is renamed multiple times between syncs. Fixes for NFS-Ganesha BZ#1728588 A race condition existed where, when attempting to re-establish a connection with an NFS client, the server did not clean up existing state in time. This led to the new connection being incorrectly identified as having expired, rendering the mount point inaccessible. State is now cleaned before a new connection is accepted so this issue no longer occurs. BZ#1751210 NFS-Ganesha used client credentials for all operations on Gluster storage. In cases where a non-root user was operating on a read-only file, this resulted in 'permission denied' errors. Root permissions are now used where appropriate so that non-root users are able to create and write to files using 0444 mode. Fixes for Replication BZ#1688395 When eager-lock lock acquisition failed during a write transaction, the lock was retained, which blocked all subsequent writes and caused a hang. This is now handled correctly and more specific log messages have been added to assist in diagnosing related issues. BZ#1642425 The cluster.quorum-count volume option was not being updated in the volume configuration file for Gluster NFS volumes because when the last part of the file read is smaller than the buffer size, the data written from the buffer was a combination of new and old data. This has been corrected and Gluster NFS clients now honor cluster.quorum-count when cluster.quorum-type is set to fixed . Fixes for Sharding BZ#1568758 Deleting a file with a large number of shards timed out because unlink operations occurred on all shards in parallel, which led to contention on the .shard directory. Timeouts resulted in failed deletions and stale shards remaining in the .shard directory. Shard deletion is now a background process that deletes one batch of shards at a time, to control contention on the .shard directory and prevent timeouts. The size of shard deletion batches is controlled with the features.shard-deletion-rate option, which is set to 100 by default. Fixes for Web Administration BZ#1645428 The previously shipped version of the python2-pyasn1 package caused IPA client installation to fail. This package is replaced with updates to tendrl-notifier and tendrl-commons so that pysnmp is used instead of python2-pyasn1 , and installation works as expected. Before upgrading to Red Hat Gluster Storage Web Administration 3.5, remove the python2-pyasn1 and pysnmp packages (but not their dependencies) by running the following commands: BZ#1647322 Previously, tendrl did not set an owner for the /var/lib/carbon/whisper/tendrl directory. When the owner of this directory was not the carbon user, carbon-cache could not create whisper files in this location. Tendrl now ensures the directory is owned by the carbon user to ensure whisper files can be created. BZ#1688630 Previously, errors that occurred because tendrl-monitoring-integration was not running were reported with generic error messages. More specific error messages about tendrl-monitoring-integration status is now logged in this situation. 
BZ#1645221 Previously, Red Hat Gluster Storage web administration expected all nodes to be online before any node could stop being managed by web administration. It is now possible to remove a node from being managed even when one or more nodes in the cluster are not online. BZ#1666386 Red Hat Gluster Storage web administration previously received all split brain related events and displayed these as errors in the user interface, even when they were part of correctly operating heal processes. Events are now filtered based on the client identifier to remove unnecessary and erroneous errors from the user interface. BZ#1687333 Previously, when all nodes in a cluster were offline, the web administration interface did not report the correct number of nodes offline. Node status is now correctly tracked and reported. BZ#1686888 The node-agent service is responsible for import and remove (stop managing) operations. These operations timed out with a generic log message when the node-agent service was not running. This issue is now logged more clearly when it occurs. BZ#1702412 Previously, Ansible 2.8 compatibility did not work correctly. Red Hat Storage Web Administration is now compatible with Ansible 2.8.
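Several of the fixes above refer to volume options and commands without showing their syntax. The following sketch collects typical invocations; myvol is a placeholder volume name, and each setting should only be changed after reviewing the corresponding fix description.
# Obtain inode table information through a statedump instead of 'gluster volume status myvol inode'
gluster volume statedump myvol
# Keep read caches coherent when a file is accessed from different mount points (disabled by default)
gluster volume set myvol performance.global-cache-invalidation on
# Samba-related settings that gdeploy now applies during setup
gluster volume set myvol group samba
gluster volume set myvol user.cifs enable
# Fixed quorum configuration honored by Gluster NFS clients
gluster volume set myvol cluster.quorum-type fixed
gluster volume set myvol cluster.quorum-count 2
# Control the shard deletion batch size (default 100)
gluster volume set myvol features.shard-deletion-rate 100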
[ "rpm -e --nodeps USD(rpm -qa 'python2-pyasn1') rpm -e --nodeps USD(rpm -qa 'pysnmp')" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/3.5_release_notes/chap-documentation-3.5_release_notes-bug_fixes
Chapter 2. Managing physical device drives using the Web Console
Chapter 2. Managing physical device drives using the Web Console 2.1. Creating a partition table using the Web Console Follow these steps to create a new partition table on a drive using the Web Console. Log in to the Web Console. Click the hostname Storage . Click any drive under Drives . The Drive Overview page opens. Click Create partition table . Figure 2.1. Drive Content The Format device window opens. Specify whether to Erase existing data completely by overwriting it with zeroes. Specify the Partitioning style to use. Click Format . 2.2. Formatting a disk partition using the Web Console Follow these steps to format a partition with a file system using the Web Console. Log in to the Web Console. Click the hostname Storage . Click any drive under Drives . The Drive Overview page opens. Click the device under Content . In the Filesystem subtab, click Format . The Filesystem subtab The Format Device window appears. Specify whether to Erase existing data completely by overwriting it with zeroes. Specify the file system Type to use. Specify a Name for the file system. Specify whether to use default or customized Mounting behavior. If you selected Custom , specify a Mount Point and check any Mount options you want this file system to use. Click Format .
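If the Web Console is not available, the same partitioning and formatting steps can be performed from the command line. The following sketch is an example only: /dev/sdX is a placeholder, and these commands destroy any existing data on the device.
# Create a GPT partition table and a single partition spanning the disk
parted --script /dev/sdX mklabel gpt
parted --script /dev/sdX mkpart primary xfs 1MiB 100%
# Format the new partition with XFS and mount it
mkfs.xfs /dev/sdX1
mkdir -p /mnt/data
mount /dev/sdX1 /mnt/data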
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/managing_red_hat_gluster_storage_using_the_web_console/assembly-cockpit-mgmt-disk_drive
Chapter 1. Introduction to Python
Chapter 1. Introduction to Python Python is a high-level programming language that supports multiple programming paradigms, such as object-oriented, imperative, functional, and procedural paradigms. Python has dynamic semantics and can be used for general-purpose programming. With Red Hat Enterprise Linux, many packages that are installed on the system, such as packages providing system tools, tools for data analysis, or web applications, are written in Python. To use these packages, you must have the python* packages installed. 1.1. Python versions Python 3.9 is the default Python implementation in RHEL 9. Python 3.9 is distributed in a non-modular python3 RPM package in the BaseOS repository and is usually installed by default. Python 3.9 will be supported for the whole life cycle of RHEL 9. Additional versions of Python 3 are distributed as non-modular RPM packages with a shorter life cycle through the AppStream repository in minor RHEL 9 releases. You can install these additional Python 3 versions in parallel with Python 3.9. Python 2 is not distributed with RHEL 9. Table 1.1. Python versions in RHEL 9 Version Package to install Command examples Available since Life cycle Python 3.9 python3 python3 , pip3 RHEL 9.0 full RHEL 9 Python 3.11 python3.11 python3.11 , pip3.11 RHEL 9.2 shorter Python 3.12 python3.12 python3.12 , pip3.12 RHEL 9.4 shorter For details about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . 1.2. Major differences in the Python ecosystem since RHEL 8 The following are the major changes in the Python ecosystem in RHEL 9 compared to RHEL 8: The unversioned python command The unversioned form of the python command ( /usr/bin/python ) is available in the python-unversioned-command package. On some systems, this package is not installed by default. To install the unversioned form of the python command manually, use the dnf install /usr/bin/python command. In RHEL 9, the unversioned form of the python command points to the default Python 3.9 version and it is an equivalent to the python3 and python3.9 commands. In RHEL 9, you cannot configure the unversioned command to point to a different version than Python 3.9 . The python command is intended for interactive sessions. In production, it is recommended to use python3 , python3.9 , python3.11 , or python3.12 explicitly. You can uninstall the unversioned python command by using the dnf remove /usr/bin/python command. If you need a different python or python3 command, you can create custom symlinks in /usr/local/bin or ~/.local/bin , or use a Python virtual environment. Several other unversioned commands are available, such as /usr/bin/pip in the python3-pip package. In RHEL 9, all unversioned commands point to the default Python 3.9 version. Architecture-specific Python wheels Architecture-specific Python wheels built on RHEL 9 newly adhere to the upstream architecture naming, which allows customers to build their Python wheels on RHEL 9 and install them on non-RHEL systems. Python wheels built on releases of RHEL are compatible with later versions and can be installed on RHEL 9. Note that this affects only wheels containing Python extensions, which are built for each architecture, not Python wheels with pure Python code, which is not architecture-specific.
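A short sketch of the commands described in this chapter, showing how to install an additional Python version alongside the default Python 3.9 and call it explicitly; the requests package in the last step is only an example.
# Install the default interpreter and an additional version in parallel
dnf install python3
dnf install python3.11
# Optionally install the unversioned 'python' command (always points to Python 3.9 on RHEL 9)
dnf install /usr/bin/python
# Use a specific version explicitly, for example inside a virtual environment
python3.11 -m venv ~/venv-311
source ~/venv-311/bin/activate
pip install requests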
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_and_using_dynamic_programming_languages/assembly_introduction-to-python_installing-and-using-dynamic-programming-languages
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/private_automation_hub_life_cycle/making-open-source-more-inclusive
Chapter 3. alt-java and java uses
Chapter 3. alt-java and java uses Depending on your needs, you can use either the alt-java binary or the java binary to run your application's code. 3.1. alt-java usage Use alt-java for any applications that run untrusted code. Be aware that using alt-java is not a solution to all speculative execution vulnerabilities. 3.2. java usage Use the java binary for performance-critical tasks in a secure environment. Most RPMs in a Red Hat Enterprise Linux system use the java binary, except for IcedTea-Web. IcedTea-Web uses alt-java as its launcher, so you can use IcedTea-Web to run untrusted code. Additional resources See Java and Speculative Execution Vulnerabilities .
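Because alt-java is installed alongside java and typically accepts the same command-line options, switching launchers is usually just a matter of invoking a different binary. The following sketch is an example; the JAR names are placeholders.
# Run untrusted code with the hardened launcher
alt-java -jar untrusted-app.jar
# Run trusted, performance-critical code with the standard launcher
java -jar trusted-app.jar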
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_alt-java/using-java-and-altjava
Chapter 5. Configuring iPXE to Reduce Provisioning Times
Chapter 5. Configuring iPXE to Reduce Provisioning Times You can use Satellite to configure PXELinux to chainboot iPXE in BIOS mode and boot using the HTTP protocol if you have the following restrictions that prevent you from using PXE: A network with unmanaged DHCP servers. A PXE service that is blacklisted on your network or restricted by a firewall. An unreliable TFTP UDP-based protocol because of, for example, a low-bandwidth network. For more information about iPXE support, see Supported architectures for provisioning article. iPXE Overview iPXE is an open source network boot firmware. It provides a full PXE implementation enhanced with additional features, including booting from HTTP server. For more information, see ipxe.org . There are three methods of using iPXE with Red Hat Satellite: Booting virtual machines using hypervisors that use iPXE as primary firmware. Using PXELinux through TFTP to chainload iPXE directly on bare metal hosts. Using PXELinux through UNDI, which uses HTTP to transfer the kernel and the initial RAM disk on bare-metal hosts. Security Information The iPXE binary in Red Hat Enterprise Linux is built without some security features. For this reason, you can only use HTTP, and cannot use HTTPS. All security-related features of iPXE in Red Hat Enterprise Linux are not supported. For more information, see Red Hat Enterprise Linux HTTPS support in iPXE . Prerequisites A host exists on Red Hat Satellite to use. The MAC address of the provisioning interface matches the host configuration. The provisioning interface of the host has a valid DHCP reservation. The NIC is capable of PXE booting. For more information, see supported hardware on ipxe.org for a list of hardware drivers expected to work with an iPXE-based boot disk. The NIC is compatible with iPXE. To prepare iPXE environment, you must perform this procedure on all Capsules. Procedure Enable the tftp and httpboot services: Install the ipxe-bootimgs package: Correct the SELinux file contexts: Copy the iPXE firmware with the Linux kernel header to the TFTP directory: Copy the UNDI iPXE firmware to the TFTP directory: Optionally, configure Foreman discovery. For more information, see Chapter 7, Configuring the Discovery Service . In the Satellite web UI, navigate to Administer > Settings , and click the Provisioning tab. Locate the Default PXE global template entry row and in the Value column, change the value to discovery . 5.1. Booting Virtual Machines Some virtualization hypervisors use iPXE as primary firmware for PXE booting. Because of this, you can boot virtual machines without TFTP and PXELinux. Chainbooting virtual machine workflow Using virtualization hypervisors removes the need for TFTP and PXELinux. It has the following workflow: Virtual machine starts iPXE retrieves the network credentials, including an HTTP URL, using DHCP iPXE loads the iPXE bootstrap template from Satellite Server or Capsule iPXE loads the iPXE template with MAC as a URL parameter from Satellite Server or Capsule iPXE loads the kernel and initial RAM disk of the installer Prerequisites Ensure that the hypervisor that you want to use supports iPXE. The following virtualization hypervisors support iPXE: libvirt Red Hat Virtualization RHEV (deprecated) If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . 
Configuring Satellite Server to use iPXE You can use the default template to configure iPXE booting for hosts. If you want to change the default values in the template, clone the template and edit the clone. Procedure In the Satellite web UI, navigate to Hosts > Provisioning Templates , enter Kickstart default iPXE and click Search . Optional: If you want to change the template, click Clone , enter a unique name, and click Submit . Click the name of the template you want to use. If you clone the template, you can make changes you require on the Template tab. Click the Association tab, and select the operating systems that your host uses. Click the Locations tab, and add the location where the host resides. Click the Organizations tab, and add the organization that the host belongs to. Click Submit to save the changes. In the Satellite web UI, navigate to Hosts > Operating systems and select the operating system of your host. Click the Templates tab. From the iPXE Template list, select the template you want to use. Click Submit to save the changes. In the Satellite web UI, navigate to Hosts > All Hosts . In the Hosts page, select the host that you want to use. Select the Operating System tab. Set PXE Loader to iPXE Embedded . Select the Templates tab. From the iPXE template list, select Review to verify that the Kickstart default iPXE template is the correct template. Set the HTTP URL. If you want to use Satellite Server for booting, run the following command on Satellite Server: If you want to use Capsule for booting, run the following command on Capsule: 5.2. Chainbooting iPXE from PXELinux Use this procedure to set up iPXE to use a built-in driver for network communication or UNDI interface. To use HTTP with iPXE, use iPXE build with built-in drivers ( ipxe.lkrn ). Universal Network Device Interface (UNDI) is a minimalistic UDP/IP stack that implements TFTP client, however, cannot support other protocols like HTTP ( undionly-ipxe.0 ). You can choose to either load ipxe.lkrn or undionly-ipxe.0 file depending on the networking hardware capabilities and iPXE driver availability. Chainbooting iPXE directly or with UNDI workflow Host powers on PXE driver retrieves the network credentials using DHCP PXE driver retrieves the PXELinux firmware pxelinux.0 using TFTP PXELinux searches for the configuration file on the TFTP server PXELinux chainloads iPXE ipxe.lkrn or undionly-ipxe.0 iPXE retrieves the network credentials, including an HTTP URL, using DHCP again iPXE chainloads the iPXE template from the template Capsule iPXE loads the kernel and initial RAM disk of the installer Prerequisite If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . Configuring Satellite Server to use iPXE You can use the default template to configure iPXE booting for hosts. If you want to change the default values in the template, clone the template and edit the clone. Procedure In the Satellite web UI, navigate to Hosts > Provisioning Templates . Enter PXELinux chain iPXE to use ipxe.lkrn or, for BIOS systems, enter PXELinux chain iPXE UNDI to use undionly-ipxe.0 , and click Search . Optional: If you want to change the template, click Clone , enter a unique name, and click Submit . Click the name of the template you want to use. If you clone the template, you can make changes you require on the Template tab. 
Click the Association tab, and select the operating systems that your host uses. Click the Locations tab, and add the location where the host resides. Click the Organizations tab, and add the organization that the host belongs to. Click Submit to save the changes. In the Provisioning Templates page, enter Kickstart default iPXE into the search field and click Search . Optional: If you want to change the template, click Clone , enter a unique name, and click Submit . Click the name of the template you want to use. If you clone the template, you can make changes you require on the Template tab. Click the Association tab, and associate the template with the operating system that your host uses. Click the Locations tab, and add the location where the host resides. Click the Organizations tab, and add the organization that the host belongs to. Click Submit to save the changes. In the Satellite web UI, navigate to Hosts > Operating systems and select the operating system of your host. Click the Templates tab. From the PXELinux template list, select the template you want to use. From the iPXE template list, select the template you want to use. Click Submit to save the changes. In the Satellite web UI, navigate to Configure > Host Groups , and select the host group you want to configure. Select the Operating System tab. Select the Architecture and Operating system . Set PXE Loader to PXELinux BIOS to chainboot iPXE via PXELinux, or to iPXE Chain BIOS to load undionly-ipxe.0 directly. Set the HTTP URL. If you want to use Satellite Server for booting, run the following command on Satellite Server: If you want to use Capsule for booting, run the following command on Capsule:
[ "satellite-installer --foreman-proxy-httpboot true --foreman-proxy-tftp true", "yum install ipxe-bootimgs", "restorecon -RvF /var/lib/tftpboot/", "cp /usr/share/ipxe/ipxe.lkrn /var/lib/tftpboot/", "cp /usr/share/ipxe/undionly.kpxe /var/lib/tftpboot/undionly-ipxe.0", "satellite-installer --foreman-proxy-dhcp-ipxefilename \"http:// satellite.example.com /unattended/iPXE?bootstrap=1\"", "satellite-installer --foreman-proxy-dhcp-ipxefilename \"http:// capsule.example.com /unattended/iPXE?bootstrap=1\"", "satellite-installer --foreman-proxy-dhcp-ipxefilename \"http:// satellite.example.com /unattended/iPXE?bootstrap=1\"", "satellite-installer --foreman-proxy-dhcp-ipxefilename \"http:// capsule.example.com /unattended/iPXE?bootstrap=1\"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/configuring_ipxe_to_reduce_provisioning_times_provisioning
Chapter 2. Console monitoring and alerting
Chapter 2. Console monitoring and alerting Red Hat Quay provides support for monitoring instances that were deployed by using the Red Hat Quay Operator, from inside the OpenShift Container Platform console. The new monitoring features include a Grafana dashboard, access to individual metrics, and alerting to notify for frequently restarting Quay pods. Note To enable the monitoring features, the Red Hat Quay Operator must be installed in All Namespaces mode. 2.1. Dashboard On the OpenShift Container Platform console, click Monitoring Dashboards and search for the dashboard of your desired Red Hat Quay registry instance: The dashboard shows various statistics including the following: The number of Organizations , Repositories , Users , and Robot accounts CPU Usage Max memory usage Rates of pulls and pushes, and authentication requests API request rate Latencies 2.2. Metrics You can see the underlying metrics behind the Red Hat Quay dashboard by accessing Monitoring Metrics in the UI. In the Expression field, enter the text quay_ to see the list of metrics available: Select a sample metric, for example, quay_org_rows : This metric shows the number of organizations in the registry. It is also directly surfaced in the dashboard. 2.3. Alerting An alert is raised if the Quay pods restart too often. The alert can be configured by accessing the Alerting rules tab from Monitoring Alerting in the console UI and searching for the Quay-specific alert: Select the QuayPodFrequentlyRestarting rule detail to configure the alert:
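If you prefer the CLI to the console for these checks, the following sketch may help. It assumes that the alert is deployed as a PrometheusRule object by the monitoring integration, which is not stated in this chapter; the namespace is a placeholder.
# Locate the Quay alerting rules (object layout and namespace are assumptions)
oc get prometheusrule --all-namespaces | grep -i quay
oc get prometheusrule -n <quay_namespace> -o yaml | grep -B2 -A8 QuayPodFrequentlyRestarting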
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_operator_features/operator-console-monitoring-alerting
Chapter 3. User tasks
Chapter 3. User tasks 3.1. Creating applications from installed Operators This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. 3.1.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.16 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.2. Installing Operators in your namespace If a cluster administrator has delegated Operator installation permissions to your account, you can install and subscribe an Operator to your namespace in a self-service manner. 3.2.1. Prerequisites A cluster administrator must add certain permissions to your OpenShift Container Platform user account to allow self-service Operator installation to a namespace. See Allowing non-cluster administrators to install Operators for details. 3.2.2. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. 
As a user with the proper permissions, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 3.2.3. Installing from OperatorHub by using the web console You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page, configure your Operator installation: If you want to install a specific version of an Operator, select an Update channel and Version from the lists. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install. Note The version selection defaults to the latest version for the channel selected. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, Manual approval is required when not installing the latest version for the selected channel. Installing an Operator with Manual approval causes all Operators installed within the namespace to function with the Manual approval strategy and all Operators are updated together. If you want to update Operators independently, install Operators into separate namespaces. Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. For clusters on cloud providers with token authentication enabled: If the cluster uses AWS STS ( STS Mode in the web console), enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field. To create the role's ARN, follow the procedure described in Preparing AWS account . 
If the cluster uses Microsoft Entra Workload ID ( Workload Identity / Federated Identity Mode in the web console), add the client ID, tenant ID, and subscription ID in the appropriate field. For Update approval , select either the Automatic or Manual approval strategy. Important If the web console shows that the cluster uses AWS STS or Microsoft Entra Workload ID, you must set Update approval to Manual . Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster: If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. Verification After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should eventually resolve to Succeeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to Succeeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. When the Operator is installed, the metadata indicates which channel and version are installed. Note The Channel and Version dropdown menus are still available for viewing other version metadata in this catalog context. 3.2.4. Installing from OperatorHub by using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object. For SingleNamespace install mode, you must also ensure an appropriate Operator group exists in the related namespace. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. Tip In most cases, the web console method of this procedure is preferred because it automates tasks in the background, such as handling the creation of OperatorGroup and Subscription objects automatically when choosing SingleNamespace mode. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. You have installed the OpenShift CLI ( oc ). Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example 3.1. Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m # ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m # ... 
etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m # ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace Example 3.2. Example output # ... Kind: PackageManifest # ... Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces # ... Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 # ... Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4 1 Indicates which install modes are supported. 2 3 Example channel names. 4 The channel selected by default if one is not specified. Tip You can print an Operator's version and channel information in YAML format by running the following command: USD oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog: USD oc get packagemanifest \ --selector=catalog=<catalogsource_name> \ --field-selector metadata.name=<operator_name> \ -n <catalog_namespace> -o yaml Important If you do not specify the Operator's catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met: Multiple catalogs are installed in the same namespace. The catalogs contain the same Operators or Operators with the same name. If the Operator you intend to install supports the AllNamespaces install mode, and you choose to use this mode, skip this step, because the openshift-operators namespace already has an appropriate Operator group in place by default, called global-operators . If the Operator you intend to install supports the SingleNamespace install mode, and you choose to use this mode, you must ensure an appropriate Operator group exists in the related namespace. If one does not exist, you can create create one by following these steps: Important You can only have one Operator group per namespace. For more information, see "Operator groups". Create an OperatorGroup object YAML file, for example operatorgroup.yaml , for SingleNamespace install mode: Example OperatorGroup object for SingleNamespace install mode apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2 1 2 For SingleNamespace install mode, use the same <namespace> value for both the metadata.namespace and spec.targetNamespaces fields. Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object to subscribe a namespace to an Operator: Create a YAML file for the Subscription object, for example subscription.yaml : Note If you want to subscribe to a specific version of an Operator, set the startingCSV field to the desired version and set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For details, see the following "Example Subscription object with a specific starting Operator version". Example 3.3. 
Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. For SingleNamespace install mode usage, specify the relevant single namespace. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate environment variables in the container. 8 The volumes parameter defines a list of volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Example 3.4. Example Subscription object with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. For clusters on cloud providers with token authentication enabled, configure your Subscription object by following these steps: Ensure the Subscription object is set to manual update approvals: kind: Subscription # ... spec: installPlanApproval: Manual 1 1 Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update. Include the relevant cloud provider-specific fields in the Subscription object's config section: If the cluster is in AWS STS mode, include the following fields: kind: Subscription # ... spec: config: env: - name: ROLEARN value: "<role_arn>" 1 1 Include the role ARN details. 
If the cluster is in Microsoft Entra Workload ID mode, include the following fields: kind: Subscription # ... spec: config: env: - name: CLIENTID value: "<client_id>" 1 - name: TENANTID value: "<tenant_id>" 2 - name: SUBSCRIPTIONID value: "<subscription_id>" 3 1 Include the client ID. 2 Include the tenant ID. 3 Include the subscription ID. Create the Subscription object by running the following command: USD oc apply -f subscription.yaml If you set the installPlanApproval field to Manual , manually approve the pending install plan to complete the Operator installation. For more information, see "Manually approving a pending Operator update". At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verification Check the status of the Subscription object for your installed Operator by running the following command: USD oc describe subscription <subscription_name> -n <namespace> If you created an Operator group for SingleNamespace install mode, check the status of the OperatorGroup object by running the following command: USD oc describe operatorgroup <operatorgroup_name> -n <namespace> Additional resources Operator groups Channel names Additional resources Manually approving a pending Operator update
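Two CLI sketches related to this chapter, for readers who prefer oc to the web console. The first creates the EtcdCluster instance from Section 3.1; the API group and version shown (etcd.database.coreos.com/v1beta2) are an assumption based on the community etcd Operator and should be confirmed with oc api-resources before applying. The second lists and approves a pending install plan when a subscription uses the Manual approval strategy; namespace and object names are placeholders.
# Confirm the API exposed by the installed etcd Operator, then create an instance (API version assumed)
oc api-resources | grep -i etcd
cat <<EOF | oc apply -n my-etcd -f -
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example
spec:
  size: 3
EOF
# List pending install plans and approve one so the CSV can finish installing
oc get installplan -n <namespace>
oc patch installplan <install_plan_name> -n <namespace> --type merge --patch '{"spec":{"approved":true}}'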
[ "oc get csv", "oc policy add-role-to-user edit <user> -n <target_project>", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4", "oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml", "oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2", "kind: Subscription spec: installPlanApproval: Manual 1", "kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1", "kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3", "oc apply -f subscription.yaml", "oc describe subscription <subscription_name> -n <namespace>", "oc describe operatorgroup <operatorgroup_name> -n <namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operators/user-tasks
8.105. kexec-tools
8.105. kexec-tools 8.105.1. RHBA-2014:1502 - kexec-tools bug fix and enhancement update Updated kexec-tools packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The kexec-tools packages contain the /sbin/kexec binary and utilities that together form the user-space component of the kernel's kexec feature. The /sbin/kexec binary facilitates booting a new kernel using the kernel's kexec feature on either a normal or a panic reboot. The kexec fastboot mechanism allows booting a Linux kernel from the context of an already running kernel. Bug Fixes BZ# 806992 Previously, if the system had two or more network interfaces configured, where one of the devices was configured with a default gateway and another with a static route to a private network, kdump ignored the non-default static route. As a consequence, kdump failed to dump a core file over NFS or SSH because it did not configure the route to the private network. This bug has been fixed and kdump now successfully dumps the core file over NFS or SSH as expected. BZ# 1061480 Previously, booting the crash kernel with more than one CPU occasionally caused some systems to become unresponsive when the crash happened on an Application Processor (AP) and not on the Boot Strap Processor (BSP). To fix this bug, the initialization scripts were modified to automatically include the disable_cpu_apicid kernel option, whose value identifies the BSP. Additionally, the user has to modify the value of the nr_cpus option to specify the number of CPUs used on the system. With this fix, the user can now successfully use the crash kernel with more than one CPU on the system. BZ# 1128248 Due to a bug in the wait_for_multipath routine in the mkdumprd utility, kdump could fail to dump a core file on certain configurations with many multipath devices. This problem has been addressed with this update, and kdump now works as expected on systems with a large number of multipath devices. BZ# 1022871 Previously, kdump was unable to capture a core file on IBM System z machines with a DASD FBA device specified as a kdump target. This problem has been fixed by adding the necessary support for the DASD FBA type device to kdump, and a core file can now be captured as expected on the above configuration. BZ# 1122880 Due to an incorrect SELinux test condition in the mkdumprd utility, the kdump kernel could fail to load an SELinux policy and produce an unknown operand error. This update corrects the affected condition, and kdump now behaves as intended. BZ# 1122883 The mkdumprd utility could previously emit spurious warning messages about non-existent ifcfg files under certain circumstances. This problem has been fixed and kdump no longer emits these warning messages. In addition, this update adds the following enhancements: BZ# 929312 , BZ# 823561 , BZ# 1035156 The makedumpfile tool has been upgraded to version 1.5.6, which provides a number of bug fixes and enhancements over the previous version, including enhanced filtering and support for custom EPPIC macros in order to eliminate complex data structures, cryptographic keys, and any other specified sensitive data from dump files. BZ# 1083938 As part of the support for the fence_kdump agent in a cluster environment, the new options, fence_kdump_nodes and fence_kdump_args, have been introduced to the kdump.conf file. The fence_kdump_nodes option is used to list the hosts to send notifications from the fence_kdump agent to.
The fence_kdump_args option is used to pass command-line arguments to the fence_kdump agent. Users of kexec-tools are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
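As a purely illustrative sketch (the host names are invented and the argument values are examples, not defaults mandated by this erratum), the new directives might appear in /etc/kdump.conf as follows: fence_kdump_nodes node1.example.com node2.example.com fence_kdump_args -p 7410 -f auto -c 0 -i 10 Here the listed nodes are the hosts that receive fence_kdump notifications, and the arguments are passed through to the agent when the kdump kernel sends those notifications.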
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/kexec-tools
Chapter 9. Verify your deployment
Chapter 9. Verify your deployment After deployment is complete, verify that your deployment has completed successfully. Browse to the Administration Portal, for example, http://engine.example.com/ovirt-engine . Administration Console Login Log in using the administrative credentials added during hosted engine deployment. When login is successful, the Dashboard appears. Administration Console Dashboard Verify that your cluster is available. Administration Console Dashboard - Clusters Verify that one host is available. Click Compute → Hosts . Verify that your host is listed with a Status of Up . Verify that all storage domains are available. Click Storage → Domains . Verify that the Active icon is shown in the first column. Administration Console - Storage Domains
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/verify-rhhi-deployment
11.3. Date Specifications
11.3. Date Specifications Date specifications are used to create cron-like expressions relating to time. Each field can contain a single number or a single range. Instead of defaulting to zero, any field not supplied is ignored. For example, monthdays="1" matches the first day of every month and hours="09-17" matches the hours between 9 am and 5 pm (inclusive). However, you cannot specify weekdays="1,2" or weekdays="1-2,5-6" since they contain multiple ranges. Table 11.5. Properties of a Date Specification Field Description id A unique name for the date hours Allowed values: 0-23 monthdays Allowed values: 0-31 (depending on month and year) weekdays Allowed values: 1-7 (1=Monday, 7=Sunday) yeardays Allowed values: 1-366 (depending on the year) months Allowed values: 1-12 weeks Allowed values: 1-53 (depending on weekyear ) years Year according to the Gregorian calendar weekyears May differ from Gregorian years; for example, the date 2005-001 in ordinal notation is also 2005-01-01 in Gregorian notation and 2004-W53-6 in week-based notation moon Allowed values: 0-7 (0 is new moon, 4 is full moon).
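As a minimal sketch only (the resource name Webserver is hypothetical, and the exact pcs syntax should be verified against your release), a date specification is typically attached to a location constraint rule, for example to allow a resource to run only during business hours on weekdays: pcs constraint location Webserver rule score=INFINITY date-spec hours=9-16 weekdays=1-5 Because unsupplied fields are ignored, this rule matches any day of any month, provided the hour is between 9 and 16 and the weekday is Monday through Friday.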
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/_date_specifications
1.2. A Three-Tier LVS Configuration
1.2. A Three-Tier LVS Configuration Figure 1.2, "A Three-Tier LVS Configuration" shows a typical three-tier LVS topology. In this example, the active LVS router routes the requests from the Internet to the pool of real servers. Each of the real servers then accesses a shared data source over the network. Figure 1.2. A Three-Tier LVS Configuration This configuration is ideal for busy FTP servers, where accessible data is stored on a central, highly available server and accessed by each real server via an exported NFS directory or Samba share. This topology is also recommended for websites that access a central, highly available database for transactions. Additionally, using an active-active configuration with Red Hat Cluster Manager, administrators can configure one high-availability cluster to serve both of these roles simultaneously. The third tier in the above example does not have to use Red Hat Cluster Manager, but failing to use a highly available solution would introduce a critical single point of failure.
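As a rough sketch of that shared third tier (the paths and host names are invented for illustration), the central data server might export the FTP data read-only to each real server through /etc/exports: /var/ftp/pub realserver1.example.com(ro,sync) realserver2.example.com(ro,sync) After editing the file, running exportfs -ra on the data server reloads the export table; each real server then mounts the share at the same local path so the LVS router can schedule requests to any of them interchangeably.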
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-cm-vsa
probe::scheduler.wait_task
probe::scheduler.wait_task Name probe::scheduler.wait_task - Waiting on a task to unschedule (become inactive) Synopsis scheduler.wait_task Values task_pid PID of the task the scheduler is waiting on name name of the probe point task_priority priority of the task
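As a minimal usage sketch (assuming SystemTap and the matching kernel debuginfo packages are installed), the probe's values can be printed with a one-line script: stap -e 'probe scheduler.wait_task { printf("%s: waiting on PID %d (priority %d)\n", name, task_pid, task_priority) }' This simply reports each wait event together with the PID and priority of the task being waited on.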
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scheduler-wait-task
Chapter 3. Shenandoah garbage collector modes
Chapter 3. Shenandoah garbage collector modes You can run Shenandoah in three different modes. Select a specific mode with the -XX:ShenandoahGCMode=<name> option. The following list describes each Shenandoah mode: normal/satb (product, default) This mode runs a concurrent garbage collector (GC) with Snapshot-At-The-Beginning (SATB) marking. This marking mode does similar work to G1, the default garbage collector for Red Hat build of OpenJDK 11. iu (experimental) This mode runs a concurrent GC with Incremental Update (IU) marking. It can reclaim unreachable memory more aggressively. This marking mode mirrors the SATB mode, but it may make marking less conservative, especially around accessing weak references. passive (diagnostic) This mode runs stop-the-world (STW) GCs only. This mode is used for functional testing, but it is sometimes useful for bisecting performance anomalies related to GC barriers, or for ascertaining the actual live data size in the application.
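For illustration (the application JAR name is a placeholder, and flag requirements may vary between releases), the default satb mode needs only the Shenandoah flags, while the experimental iu mode also requires unlocking experimental VM options: java -XX:+UseShenandoahGC -XX:ShenandoahGCMode=satb -jar app.jar java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -XX:ShenandoahGCMode=iu -jar app.jar The diagnostic passive mode is selected the same way but typically requires -XX:+UnlockDiagnosticVMOptions instead.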
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk_11/different-modes-to-run-shenandoah-gc
Chapter 4. View OpenShift Data Foundation Topology
Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage → Data Foundation → Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or an indication of alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_any_platform/viewing-odf-topology_mcg-verify
Chapter 8. Assigning a Puppet class to an individual host
Chapter 8. Assigning a Puppet class to an individual host Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Locate the host you want to add the ntp Puppet class to and click Edit . Select the Puppet ENC tab and look for the ntp class. Click the + symbol next to ntp to add the ntp submodule to the list of included classes . Click Submit to save your changes. Tip If the Puppet classes tab of an individual host is empty, check if it is assigned to the proper Puppet environment. Verify the Puppet configuration. Navigate to Hosts > All Hosts and select the host. From the top overflow menu, select Legacy UI . Under Details , click Puppet YAML . This produces output similar to the following: --- parameters: // shortened YAML output classes: ntp: servers: '["0.de.pool.ntp.org","1.de.pool.ntp.org","2.de.pool.ntp.org","3.de.pool.ntp.org"]' environment: production ... Verify the ntp configuration. Connect to your host using SSH and check the content of /etc/ntp.conf . This example assumes your host is running CentOS 7 . Other operating systems may store the ntp config file in a different path. Tip You may need to run the Puppet agent on your host by executing the following command: Running the following command on the host checks which ntp servers are used for clock synchronization: This returns output similar to the following: # ntp.conf: Managed by puppet. server 0.de.pool.ntp.org server 1.de.pool.ntp.org server 2.de.pool.ntp.org server 3.de.pool.ntp.org You now have a working ntp module which you can add to a host or group of hosts to roll out your ntp configuration automatically.
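As an optional additional check (assuming the ntpd service is running on the host), you can also query the peers the daemon is actually using for synchronization: ntpq -p The output lists the configured servers together with their reachability and offset, which confirms that the Puppet-managed configuration is in effect.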
[ "--- parameters: // shortened YAML output classes: ntp: servers: '[\"0.de.pool.ntp.org\",\"1.de.pool.ntp.org\",\"2.de.pool.ntp.org\",\"3.de.pool.ntp.org\"]' environment: production", "puppet agent -t", "cat /etc/ntp.conf", "ntp.conf: Managed by puppet. server 0.de.pool.ntp.org server 1.de.pool.ntp.org server 2.de.pool.ntp.org server 3.de.pool.ntp.org" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_puppet_integration/assigning-a-puppet-class-to-an-individual-host_managing-configurations-puppet
Chapter 10. Networking
Chapter 10. Networking Trusted Network Connect Red Hat Enterprise Linux 7.1 introduces the Trusted Network Connect functionality as a Technology Preview. Trusted Network Connect is used with existing network access control (NAC) solutions, such as TLS, 802.1X, or IPsec to integrate endpoint posture assessment; that is, collecting an endpoint's system information (such as operating system configuration settings, installed packages, and others, termed as integrity measurements). Trusted Network Connect is used to verify these measurements against network access policies before allowing the endpoint to access the network. SR-IOV Functionality in the qlcnic Driver Support for Single-Root I/O virtualization (SR-IOV) has been added to the qlcnic driver as a Technology Preview. Support for this functionality will be provided directly by QLogic, and customers are encouraged to provide feedback to QLogic and Red Hat. Other functionality in the qlcnic driver remains fully supported. Berkeley Packet Filter Support for a Berkeley Packet Filter (BPF) based traffic classifier has been added to Red Hat Enterprise Linux 7.1. BPF is used in packet filtering for packet sockets, for sand-boxing in secure computing mode ( seccomp ), and in Netfilter. BPF has a just-in-time implementation for the most important architectures and has a rich syntax for building filters. Improved Clock Stability Previously, test results indicated that disabling the tickless kernel capability could significantly improve the stability of the system clock. The kernel tickless mode can be disabled by adding nohz=off to the kernel boot option parameters. However, recent improvements applied to the kernel in Red Hat Enterprise Linux 7.1 have greatly improved the stability of the system clock and the difference in stability of the clock with and without nohz=off should be much smaller now for most users. This is useful for time synchronization applications using PTP and NTP . libnetfilter_queue Packages The libnetfilter_queue package has been added to Red Hat Enterprise Linux 7.1. libnetfilter_queue is a user space library providing an API to packets that have been queued by the kernel packet filter. It enables receiving queued packets from the kernel nfnetlink_queue subsystem, parsing of the packets, rewriting packet headers, and re-injecting altered packets. Teaming Enhancements The libteam packages have been updated to version 1.15 in Red Hat Enterprise Linux 7.1. It provides a number of bug fixes and enhancements, in particular, teamd can now be automatically re-spawned by systemd , which increases overall reliability. Intel QuickAssist Technology Driver Intel QuickAssist Technology (QAT) driver has been added to Red Hat Enterprise Linux 7.1. The QAT driver enables QuickAssist hardware which adds hardware offload crypto capabilities to a system. LinuxPTP timemaster Support for Failover between PTP and NTP The linuxptp package has been updated to version 1.4 in Red Hat Enterprise Linux 7.1. It provides a number of bug fixes and enhancements, in particular, support for failover between PTP domains and NTP sources using the timemaster application. When there are multiple PTP domains available on the network, or fallback to NTP is needed, the timemaster program can be used to synchronize the system clock to all available time sources. Network initscripts Support for custom VLAN names has been added in Red Hat Enterprise Linux 7.1. Improved support for IPv6 in GRE tunnels has been added; the inner address now persists across reboots. 
TCP Delayed ACK Support for a configurable TCP Delayed ACK has been added to the iproute package in Red Hat Enterprise Linux 7.1. This can be enabled by the ip route quickack command. NetworkManager NetworkManager has been updated to version 1.0 in Red Hat Enterprise Linux 7.1. The support for Wi-Fi, Bluetooth, wireless wide area network (WWAN), ADSL, and team has been split into separate subpackages to allow for smaller installations. To support smaller environments, this update introduces an optional built-in Dynamic Host Configuration Protocol (DHCP) client that uses less memory. A new NetworkManager mode for static networking configurations that starts NetworkManager, configures interfaces and then quits, has been added. NetworkManager provides better cooperation with non-NetworkManager managed devices, specifically by no longer setting the IFF_UP flag on these devices. In addition, NetworkManager is aware of connections created outside of itself and is able to save these to be used within NetworkManager if desired. In Red Hat Enterprise Linux 7.1, NetworkManager assigns a default route for each interface allowed to have one. The metric of each default route is adjusted to select the global default interface, and this metric may be customized to prefer certain interfaces over others. Default routes added by other programs are not modified by NetworkManager. Improvements have been made to NetworkManager's IPv6 configuration, allowing it to respect IPv6 router advertisement MTUs and keeping manually configured static IPv6 addresses even if automatic configuration fails. In addition, WWAN connections now support IPv6 if the modem and provider support it. Various improvements to dispatcher scripts have been made, including support for a pre-up and pre-down script. Bonding option lacp_rate is now supported in Red Hat Enterprise Linux 7.1. NetworkManager has been enhanced to provide easy device renaming when renaming master interfaces with slave interfaces. A priority setting has been added to the auto-connect function of NetworkManager . Now, if more than one eligible candidate is available for auto-connect, NetworkManager selects the connection with the highest priority. If all available connections have equal priority values, NetworkManager uses the default behavior and selects the last active connection. This update also introduces numerous improvements to the nmcli command-line utility, including the ability to provide passwords when connecting to Wi-Fi or 802.1X networks. Network Namespaces and VTI Support for virtual tunnel interfaces ( VTI ) with network namespaces has been added in Red Hat Enterprise Linux 7.1. This enables traffic from a VTI to be passed between different namespaces when packets are encapsulated or de-encapsulated. Alternative Configuration Storage for the MemberOf Plug-In The configuration of the MemberOf plug-in for the Red Hat Directory Server can now be stored in a suffix mapped to a back-end database. This allows the MemberOf plug-in configuration to be replicated, which makes it easier for the user to maintain a consistent MemberOf plug-in configuration in a replicated environment.
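For illustration only (the network, device, and connection names below are placeholders rather than values from this release note), two of the features mentioned above could be exercised as follows: ip route change 192.0.2.0/24 dev eth0 quickack 1 nmcli connection modify eth0 connection.autoconnect-priority 10 The first command sets the quickack flag on a specific route to enable the configurable TCP Delayed ACK behavior; the second raises the auto-connect priority of a connection profile so that NetworkManager prefers it when several eligible profiles are available.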
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-networking
Chapter 191. Kafka Component
Chapter 191. Kafka Component Available as of Camel version 2.13 The kafka: component is used for communicating with Apache Kafka message broker. Maven users will need to add the following dependency to their pom.xml for this component. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-kafka</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 191.1. URI format kafka:topic[?options] 191.2. Options The Kafka component supports 9 options, which are listed below. Name Description Default Type configuration (common) Allows to pre-configure the Kafka component with common options that the endpoints will reuse. KafkaConfiguration brokers (common) URL of the Kafka brokers to use. The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers. This option is known as bootstrap.servers in the Kafka documentation. String workerPool (advanced) To use a shared custom worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. ExecutorService useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. false boolean breakOnFirstError (consumer) This options controls what happens when a consumer is processing an exchange and it fails. If the option is false then the consumer continues to the message and processes it. If the option is true then the consumer breaks out, and will seek back to offset of the message that caused a failure, and then re-attempt to process this message. However this can lead to endless processing of the same message if its bound to fail every time, eg a poison message. Therefore its recommended to deal with that for example by using Camel's error handler. false boolean allowManualCommit (consumer) Whether to allow doing manual commits via KafkaManualCommit. If this option is enabled then an instance of KafkaManualCommit is stored on the Exchange message header, which allows end users to access this API and perform manual offset commits via the Kafka consumer. false boolean kafkaManualCommit Factory (consumer) Factory to use for creating KafkaManualCommit instances. This allows to plugin a custom factory to create custom KafkaManualCommit instances in case special logic is needed when doing manual commits that deviates from the default implementation that comes out of the box. KafkaManualCommit Factory resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean shutdownTimeout (common) Timeout in milliseconds to wait gracefully for the consumer or producer to shutdown and terminate its worker threads. 30000 int The Kafka endpoint is configured using URI syntax: with the following path and query parameters: 191.2.1. Path Parameters (1 parameters): Name Description Default Type topic Required Name of the topic to use. On the consumer you can use comma to separate multiple topics. A producer can only send a message to a single topic. String 191.2.2. Query Parameters (94 parameters): Name Description Default Type brokers (common) URL of the Kafka brokers to use. 
The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers. This option is known as bootstrap.servers in the Kafka documentation. String clientId (common) The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request. String headerFilterStrategy (common) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy reconnectBackoffMaxMs (common) The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. 1000 Integer allowManualCommit (consumer) Whether to allow doing manual commits via KafkaManualCommit. If this option is enabled then an instance of KafkaManualCommit is stored on the Exchange message header, which allows end users to access this API and perform manual offset commits via the Kafka consumer. false boolean autoCommitEnable (consumer) If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin. true Boolean autoCommitIntervalMs (consumer) The frequency in ms that the consumer offsets are committed to zookeeper. 5000 Integer autoCommitOnStop (consumer) Whether to perform an explicit auto commit when the consumer stops to ensure the broker has a commit from the last consumed message. This requires the option autoCommitEnable is turned on. The possible values are: sync, async, or none. And sync is the default value. sync String autoOffsetReset (consumer) What to do when there is no initial offset in ZooKeeper or if an offset is out of range: earliest : automatically reset the offset to the earliest offset latest : automatically reset the offset to the latest offset fail: throw exception to the consumer latest String breakOnFirstError (consumer) This options controls what happens when a consumer is processing an exchange and it fails. If the option is false then the consumer continues to the message and processes it. If the option is true then the consumer breaks out, and will seek back to offset of the message that caused a failure, and then re-attempt to process this message. However this can lead to endless processing of the same message if its bound to fail every time, eg a poison message. Therefore its recommended to deal with that for example by using Camel's error handler. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean checkCrcs (consumer) Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. 
true Boolean consumerRequestTimeoutMs (consumer) The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. 40000 Integer consumersCount (consumer) The number of consumers that connect to kafka server 1 int consumerStreams (consumer) Number of concurrent consumers on the consumer 10 int fetchMaxBytes (consumer) The maximum amount of data the server should return for a fetch request This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel. 52428800 Integer fetchMinBytes (consumer) The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. 1 Integer fetchWaitMaxMs (consumer) The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes 500 Integer groupId (consumer) A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id multiple processes indicate that they are all part of the same consumer group. This option is required for consumers. String heartbeatIntervalMs (consumer) The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. 3000 Integer kafkaHeaderDeserializer (consumer) Sets custom KafkaHeaderDeserializer for deserialization kafka headers values to camel headers values. KafkaHeaderDeserializer keyDeserializer (consumer) Deserializer class for key that implements the Deserializer interface. org.apache.kafka.common.serialization.StringDeserializer String maxPartitionFetchBytes (consumer) The maximum amount of data per-partition the server will return. The maximum total memory used for a request will be #partitions max.partition.fetch.bytes. This size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on a certain partition. 1048576 Integer maxPollIntervalMs (consumer) The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. 
Long maxPollRecords (consumer) The maximum number of records returned in a single call to poll() 500 Integer offsetRepository (consumer) The offset repository to use in order to locally store the offset of each partition of the topic. Defining one will disable the autocommit. StateRepository partitionAssignor (consumer) The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used org.apache.kafka.clients.consumer.RangeAssignor String pollTimeoutMs (consumer) The timeout used when polling the KafkaConsumer. 5000 Long seekTo (consumer) Set if KafkaConsumer will read from beginning or end on startup: beginning : read from beginning end : read from end This is replacing the earlier property seekToBeginning String sessionTimeoutMs (consumer) The timeout used to detect failures when using Kafka's group management facilities. 10000 Integer shutdownTimeout (common) Timeout in milliseconds to wait gracefully for the consumer or producer to shutdown and terminate its worker threads. 30000 int topicIsPattern (consumer) Whether the topic is a pattern (regular expression). This can be used to subscribe to dynamic number of topics matching the pattern. false boolean valueDeserializer (consumer) Deserializer class for value that implements the Deserializer interface. org.apache.kafka.common.serialization.StringDeserializer String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern bridgeEndpoint (producer) If the option is true, then KafkaProducer will ignore the KafkaConstants.TOPIC header setting of the inbound message. false boolean bufferMemorySize (producer) The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will either block or throw an exception based on the preference specified by block.on.buffer.full.This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. 33554432 Integer circularTopicDetection (producer) If the option is true, then KafkaProducer will detect if the message is attempted to be sent back to the same topic it may come from, if the message was original from a kafka consumer. If the KafkaConstants.TOPIC header is the same as the original kafka consumer topic, then the header setting is ignored, and the topic of the producer endpoint is used. In other words this avoids sending the same message back to where it came from. This option is not in use if the option bridgeEndpoint is set to true. true boolean compressionCodec (producer) This parameter allows you to specify the compression codec for all data generated by this producer. Valid values are none, gzip and snappy. none String connectionMaxIdleMs (producer) Close idle connections after the number of milliseconds specified by this config. 
540000 Integer enableIdempotence (producer) If set to 'true' the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries may write duplicates of the retried message in the stream. If set to true this option will require max.in.flight.requests.per.connection to be set to 1 and retries cannot be zero and additionally acks must be set to 'all'. false boolean kafkaHeaderSerializer (producer) Sets custom KafkaHeaderDeserializer for serialization camel headers values to kafka headers values. KafkaHeaderSerializer key (producer) The record key (or null if no key is specified). If this option has been configured then it take precedence over header KafkaConstants#KEY String keySerializerClass (producer) The serializer class for keys (defaults to the same as for messages if nothing is given). org.apache.kafka.common.serialization.StringSerializer String lingerMs (producer) The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delaythat is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absense of load. 0 Integer maxBlockMs (producer) The configuration controls how long sending to kafka will block. These methods can be blocked for multiple reasons. For e.g: buffer full, metadata unavailable.This configuration imposes maximum limit on the total time spent in fetching metadata, serialization of key and value, partitioning and allocation of buffer memory when doing a send(). In case of partitionsFor(), this configuration imposes a maximum time threshold on waiting for metadata 60000 Integer maxInFlightRequest (producer) The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled). 5 Integer maxRequestSize (producer) The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. 1048576 Integer metadataMaxAgeMs (producer) The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. 
300000 Integer metricReporters (producer) A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. String metricsSampleWindowMs (producer) The number of samples maintained to compute metrics. 30000 Integer noOfMetricsSample (producer) The number of samples maintained to compute metrics. 2 Integer partitioner (producer) The partitioner class for partitioning messages amongst sub-topics. The default partitioner is based on the hash of the key. org.apache.kafka.clients.producer.internals.DefaultPartitioner String partitionKey (producer) The partition to which the record will be sent (or null if no partition was specified). If this option has been configured then it take precedence over header KafkaConstants#PARTITION_KEY Integer producerBatchSize (producer) The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size.Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent.A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. 16384 Integer queueBufferingMaxMessages (producer) The maximum number of unsent messages that can be queued up the producer when using async mode before either the producer must be blocked or data must be dropped. 10000 Integer receiveBufferBytes (producer) The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. 65536 Integer reconnectBackoffMs (producer) The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker. 50 Integer recordMetadata (producer) Whether the producer should store the RecordMetadata results from sending to Kafka. The results are stored in a List containing the RecordMetadata metadata's. The list is stored on a header with the key KafkaConstants#KAFKA_RECORDMETA true boolean requestRequiredAcks (producer) The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are common: acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. 
acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. 1 String requestTimeoutMs (producer) The amount of time the broker will wait trying to meet the request.required.acks requirement before sending back an error to the client. 305000 Integer retries (producer) Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. 0 Integer retryBackoffMs (producer) Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. 100 Integer sendBufferBytes (producer) Socket write buffer size 131072 Integer serializerClass (producer) The serializer class for messages. org.apache.kafka.common.serialization.StringSerializer String workerPool (producer) To use a custom worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. ExecutorService workerPoolCoreSize (producer) Number of core threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. 10 Integer workerPoolMaxSize (producer) Maximum number of threads for the worker pool for continue routing Exchange after kafka server has acknowledge the message that was sent to it from KafkaProducer using asynchronous non-blocking processing. 20 Integer synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean interceptorClasses (monitoring) Sets interceptors for producer or consumers. Producer interceptors have to be classes implementing org.apache.kafka.clients.producer.ProducerInterceptor Consumer interceptors have to be classes implementing org.apache.kafka.clients.consumer.ConsumerInterceptor Note that if you use Producer interceptor on a consumer it will throw a class cast exception in runtime String kerberosBeforeReloginMin Time (security) Login thread sleep time between refresh attempts. 60000 Integer kerberosInitCmd (security) Kerberos kinit command path. Default is /usr/bin/kinit /usr/bin/kinit String kerberosPrincipalToLocal Rules (security) A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form username/hostnameREALM are mapped to username. For more details on the format please see security authorization and acls. Multiple values can be separated by comma DEFAULT String kerberosRenewJitter (security) Percentage of random jitter added to the renewal time. 
0.05 Double kerberosRenewWindowFactor (security) Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. 0.8 Double saslJaasConfig (security) Expose the kafka sasl.jaas.config parameter Example: org.apache.kafka.common.security.plain.PlainLoginModule required username=USERNAME password=PASSWORD; String saslKerberosServiceName (security) The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. String saslMechanism (security) The Simple Authentication and Security Layer (SASL) Mechanism used. For the valid values see http://www.iana.org/assignments/sasl-mechanisms/sasl-mechanisms.xhtml GSSAPI String securityProtocol (security) Protocol used to communicate with brokers. Currently only PLAINTEXT and SSL are supported. PLAINTEXT String sslCipherSuites (security) A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.By default all the available cipher suites are supported. String sslContextParameters (security) SSL configuration using a Camel SSLContextParameters object. If configured it's applied before the other SSL endpoint parameters. SSLContextParameters sslEnabledProtocols (security) The list of protocols enabled for SSL connections. TLSv1.2, TLSv1.1 and TLSv1 are enabled by default. TLSv1.2,TLSv1.1,TLSv1 String sslEndpointAlgorithm (security) The endpoint identification algorithm to validate server hostname using server certificate. String sslKeymanagerAlgorithm (security) The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. SunX509 String sslKeyPassword (security) The password of the private key in the key store file. This is optional for client. String sslKeystoreLocation (security) The location of the key store file. This is optional for client and can be used for two-way authentication for client. String sslKeystorePassword (security) The store password for the key store file.This is optional for client and only needed if ssl.keystore.location is configured. String sslKeystoreType (security) The file format of the key store file. This is optional for client. Default value is JKS JKS String sslProtocol (security) The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. TLS String sslProvider (security) The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. String sslTrustmanagerAlgorithm (security) The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. PKIX String sslTruststoreLocation (security) The location of the trust store file. String sslTruststorePassword (security) The password for the trust store file. String sslTruststoreType (security) The file format of the trust store file. Default value is JKS. JKS String 191.3. Spring Boot Auto-Configuration The component supports 99 options, which are listed below. 
Name Description Default Type camel.component.kafka.allow-manual-commit Whether to allow doing manual commits via KafkaManualCommit. If this option is enabled then an instance of KafkaManualCommit is stored on the Exchange message header, which allows end users to access this API and perform manual offset commits via the Kafka consumer. false Boolean camel.component.kafka.break-on-first-error This options controls what happens when a consumer is processing an exchange and it fails. If the option is false then the consumer continues to the message and processes it. If the option is true then the consumer breaks out, and will seek back to offset of the message that caused a failure, and then re-attempt to process this message. However this can lead to endless processing of the same message if its bound to fail every time, eg a poison message. Therefore its recommended to deal with that for example by using Camel's error handler. false Boolean camel.component.kafka.brokers URL of the Kafka brokers to use. The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers. This option is known as bootstrap.servers in the Kafka documentation. String camel.component.kafka.configuration.allow-manual-commit Whether to allow doing manual commits via KafkaManualCommit. If this option is enabled then an instance of KafkaManualCommit is stored on the Exchange message header, which allows end users to access this API and perform manual offset commits via the Kafka consumer. false Boolean camel.component.kafka.configuration.auto-commit-enable If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin. true Boolean camel.component.kafka.configuration.auto-commit-interval-ms The frequency in ms that the consumer offsets are committed to zookeeper. 5000 Integer camel.component.kafka.configuration.auto-commit-on-stop Whether to perform an explicit auto commit when the consumer stops to ensure the broker has a commit from the last consumed message. This requires the option autoCommitEnable is turned on. The possible values are: sync, async, or none. And sync is the default value. sync String camel.component.kafka.configuration.auto-offset-reset What to do when there is no initial offset in ZooKeeper or if an offset is out of range: earliest : automatically reset the offset to the earliest offset latest : automatically reset the offset to the latest offset fail: throw exception to the consumer latest String camel.component.kafka.configuration.break-on-first-error This options controls what happens when a consumer is processing an exchange and it fails. If the option is false then the consumer continues to the message and processes it. If the option is true then the consumer breaks out, and will seek back to offset of the message that caused a failure, and then re-attempt to process this message. However this can lead to endless processing of the same message if its bound to fail every time, eg a poison message. Therefore its recommended to deal with that for example by using Camel's error handler. false Boolean camel.component.kafka.configuration.bridge-endpoint If the option is true, then KafkaProducer will ignore the KafkaConstants.TOPIC header setting of the inbound message. false Boolean camel.component.kafka.configuration.brokers URL of the Kafka brokers to use. 
The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers. This option is known as bootstrap.servers in the Kafka documentation. String camel.component.kafka.configuration.buffer-memory-size The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will either block or throw an exception based on the preference specified by block.on.buffer.full.This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. 33554432 Integer camel.component.kafka.configuration.check-crcs Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. true Boolean camel.component.kafka.configuration.circular-topic-detection If the option is true, then KafkaProducer will detect if the message is attempted to be sent back to the same topic it may come from, if the message was original from a kafka consumer. If the KafkaConstants.TOPIC header is the same as the original kafka consumer topic, then the header setting is ignored, and the topic of the producer endpoint is used. In other words this avoids sending the same message back to where it came from. This option is not in use if the option bridgeEndpoint is set to true. true Boolean camel.component.kafka.configuration.client-id The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request. String camel.component.kafka.configuration.compression-codec This parameter allows you to specify the compression codec for all data generated by this producer. Valid values are none, gzip and snappy. none String camel.component.kafka.configuration.connection-max-idle-ms Close idle connections after the number of milliseconds specified by this config. 540000 Integer camel.component.kafka.configuration.consumer-request-timeout-ms The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. 40000 Integer camel.component.kafka.configuration.consumer-streams Number of concurrent consumers on the consumer 10 Integer camel.component.kafka.configuration.consumers-count The number of consumers that connect to kafka server 1 Integer camel.component.kafka.configuration.enable-idempotence If set to 'true' the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries may write duplicates of the retried message in the stream. If set to true this option will require max.in.flight.requests.per.connection to be set to 1 and retries cannot be zero and additionally acks must be set to 'all'. 
false Boolean camel.component.kafka.configuration.fetch-max-bytes The maximum amount of data the server should return for a fetch request This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel. 52428800 Integer camel.component.kafka.configuration.fetch-min-bytes The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. 1 Integer camel.component.kafka.configuration.fetch-wait-max-ms The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes 500 Integer camel.component.kafka.configuration.group-id A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id multiple processes indicate that they are all part of the same consumer group. This option is required for consumers. String camel.component.kafka.configuration.header-filter-strategy To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy camel.component.kafka.configuration.heartbeat-interval-ms The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. 3000 Integer camel.component.kafka.configuration.interceptor-classes Sets interceptors for producer or consumers. Producer interceptors have to be classes implementing org.apache.kafka.clients.producer.ProducerInterceptor Consumer interceptors have to be classes implementing org.apache.kafka.clients.consumer.ConsumerInterceptor Note that if you use Producer interceptor on a consumer it will throw a class cast exception in runtime String camel.component.kafka.configuration.kafka-header-deserializer Sets custom KafkaHeaderDeserializer for deserialization kafka headers values to camel headers values. KafkaHeaderDeserializer camel.component.kafka.configuration.kafka-header-serializer Sets custom KafkaHeaderDeserializer for serialization camel headers values to kafka headers values. KafkaHeaderSerializer camel.component.kafka.configuration.kerberos-before-relogin-min-time Login thread sleep time between refresh attempts. 60000 Integer camel.component.kafka.configuration.kerberos-init-cmd Kerberos kinit command path. Default is /usr/bin/kinit /usr/bin/kinit String camel.component.kafka.configuration.kerberos-principal-to-local-rules A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form username/hostnameREALM are mapped to username. 
For more details on the format please see security authorization and acls. Multiple values can be separated by a comma. DEFAULT String camel.component.kafka.configuration.kerberos-renew-jitter Percentage of random jitter added to the renewal time. Double camel.component.kafka.configuration.kerberos-renew-window-factor Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. Double camel.component.kafka.configuration.key The record key (or null if no key is specified). If this option has been configured then it takes precedence over the header KafkaConstants#KEY String camel.component.kafka.configuration.key-deserializer Deserializer class for key that implements the Deserializer interface. org.apache.kafka.common.serialization.StringDeserializer String camel.component.kafka.configuration.key-serializer-class The serializer class for keys (defaults to the same as for messages if nothing is given). org.apache.kafka.common.serialization.StringSerializer String camel.component.kafka.configuration.linger-ms The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay; that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. 0 Integer camel.component.kafka.configuration.max-block-ms The configuration controls how long sending to Kafka will block. These methods can be blocked for multiple reasons, for example: buffer full, metadata unavailable. This configuration imposes a maximum limit on the total time spent in fetching metadata, serialization of key and value, partitioning and allocation of buffer memory when doing a send(). In case of partitionsFor(), this configuration imposes a maximum time threshold on waiting for metadata. 60000 Integer camel.component.kafka.configuration.max-in-flight-request The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled). 5 Integer camel.component.kafka.configuration.max-partition-fetch-bytes The maximum amount of data per-partition the server will return. The maximum total memory used for a request will be #partitions * max.partition.fetch.bytes.
This size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on a certain partition. 1048576 Integer camel.component.kafka.configuration.max-poll-interval-ms The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. Long camel.component.kafka.configuration.max-poll-records The maximum number of records returned in a single call to poll(). 500 Integer camel.component.kafka.configuration.max-request-size The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. 1048576 Integer camel.component.kafka.configuration.metadata-max-age-ms The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. 300000 Integer camel.component.kafka.configuration.metric-reporters A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. String camel.component.kafka.configuration.metrics-sample-window-ms The window of time over which a metrics sample is computed. 30000 Integer camel.component.kafka.configuration.no-of-metrics-sample The number of samples maintained to compute metrics. 2 Integer camel.component.kafka.configuration.offset-repository The offset repository to use in order to locally store the offset of each partition of the topic. Defining one will disable the autocommit. StateRepository camel.component.kafka.configuration.partition-assignor The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used. org.apache.kafka.clients.consumer.RangeAssignor String camel.component.kafka.configuration.partition-key The partition to which the record will be sent (or null if no partition was specified). If this option has been configured then it takes precedence over the header KafkaConstants#PARTITION_KEY Integer camel.component.kafka.configuration.partitioner The partitioner class for partitioning messages amongst sub-topics. The default partitioner is based on the hash of the key. org.apache.kafka.clients.producer.internals.DefaultPartitioner String camel.component.kafka.configuration.poll-timeout-ms The timeout used when polling the KafkaConsumer. 5000 Long camel.component.kafka.configuration.producer-batch-size The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes.
No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. 16384 Integer camel.component.kafka.configuration.queue-buffering-max-messages The maximum number of unsent messages that can be queued up by the producer when using async mode before either the producer must be blocked or data must be dropped. 10000 Integer camel.component.kafka.configuration.receive-buffer-bytes The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. 65536 Integer camel.component.kafka.configuration.reconnect-backoff-max-ms The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. 1000 Integer camel.component.kafka.configuration.reconnect-backoff-ms The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker. 50 Integer camel.component.kafka.configuration.record-metadata Whether the producer should store the RecordMetadata results from sending to Kafka. The results are stored in a List containing the RecordMetadata metadata. The list is stored on a header with the key KafkaConstants#KAFKA_RECORDMETA true Boolean camel.component.kafka.configuration.request-required-acks The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are common: acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. 1 String camel.component.kafka.configuration.request-timeout-ms The amount of time the broker will wait trying to meet the request.required.acks requirement before sending back an error to the client. 305000 Integer camel.component.kafka.configuration.retries Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.
Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. 0 Integer camel.component.kafka.configuration.retry-backoff-ms Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. 100 Integer camel.component.kafka.configuration.sasl-jaas-config Expose the Kafka sasl.jaas.config parameter. Example: org.apache.kafka.common.security.plain.PlainLoginModule required username=USERNAME password=PASSWORD; String camel.component.kafka.configuration.sasl-kerberos-service-name The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. String camel.component.kafka.configuration.sasl-mechanism The Simple Authentication and Security Layer (SASL) Mechanism used. For the valid values see http://www.iana.org/assignments/sasl-mechanisms/sasl-mechanisms.xhtml GSSAPI String camel.component.kafka.configuration.security-protocol Protocol used to communicate with brokers. Currently only PLAINTEXT and SSL are supported. PLAINTEXT String camel.component.kafka.configuration.seek-to Set if KafkaConsumer will read from the beginning or the end on startup: beginning : read from beginning; end : read from end. This replaces the earlier property seekToBeginning String camel.component.kafka.configuration.send-buffer-bytes Socket write buffer size. 131072 Integer camel.component.kafka.configuration.serializer-class The serializer class for messages. org.apache.kafka.common.serialization.StringSerializer String camel.component.kafka.configuration.session-timeout-ms The timeout used to detect failures when using Kafka's group management facilities. 10000 Integer camel.component.kafka.configuration.ssl-cipher-suites A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using the TLS or SSL network protocol. By default all the available cipher suites are supported. String camel.component.kafka.configuration.ssl-context-parameters SSL configuration using a Camel SSLContextParameters object. If configured it's applied before the other SSL endpoint parameters. SSLContextParameters camel.component.kafka.configuration.ssl-enabled-protocols The list of protocols enabled for SSL connections. TLSv1.2, TLSv1.1 and TLSv1 are enabled by default. TLSv1.2,TLSv1.1,TLSv1 String camel.component.kafka.configuration.ssl-endpoint-algorithm The endpoint identification algorithm to validate the server hostname using the server certificate. String camel.component.kafka.configuration.ssl-key-password The password of the private key in the key store file. This is optional for client. String camel.component.kafka.configuration.ssl-keymanager-algorithm The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. SunX509 String camel.component.kafka.configuration.ssl-keystore-location The location of the key store file. This is optional for client and can be used for two-way authentication for client.
String camel.component.kafka.configuration.ssl-keystore-password The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. String camel.component.kafka.configuration.ssl-keystore-type The file format of the key store file. This is optional for client. Default value is JKS. JKS String camel.component.kafka.configuration.ssl-protocol The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. TLS String camel.component.kafka.configuration.ssl-provider The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. String camel.component.kafka.configuration.ssl-trustmanager-algorithm The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. PKIX String camel.component.kafka.configuration.ssl-truststore-location The location of the trust store file. String camel.component.kafka.configuration.ssl-truststore-password The password for the trust store file. String camel.component.kafka.configuration.ssl-truststore-type The file format of the trust store file. Default value is JKS. JKS String camel.component.kafka.configuration.topic Name of the topic to use. On the consumer you can use a comma to separate multiple topics. A producer can only send a message to a single topic. String camel.component.kafka.configuration.topic-is-pattern Whether the topic is a pattern (regular expression). This can be used to subscribe to a dynamic number of topics matching the pattern. false Boolean camel.component.kafka.configuration.value-deserializer Deserializer class for value that implements the Deserializer interface. org.apache.kafka.common.serialization.StringDeserializer String camel.component.kafka.configuration.worker-pool To use a custom worker pool to continue routing the Exchange after the Kafka server has acknowledged the message that was sent to it from the KafkaProducer using asynchronous non-blocking processing. ExecutorService camel.component.kafka.configuration.worker-pool-core-size Number of core threads for the worker pool used to continue routing the Exchange after the Kafka server has acknowledged the message that was sent to it from the KafkaProducer using asynchronous non-blocking processing. 10 Integer camel.component.kafka.configuration.worker-pool-max-size Maximum number of threads for the worker pool used to continue routing the Exchange after the Kafka server has acknowledged the message that was sent to it from the KafkaProducer using asynchronous non-blocking processing. 20 Integer camel.component.kafka.enabled Enable the Kafka component true Boolean camel.component.kafka.kafka-manual-commit-factory Factory to use for creating KafkaManualCommit instances. This allows plugging in a custom factory to create custom KafkaManualCommit instances in case special logic is needed when doing manual commits that deviates from the default implementation that comes out of the box. The option is a org.apache.camel.component.kafka.KafkaManualCommitFactory type. String camel.component.kafka.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders.
true Boolean camel.component.kafka.shutdown-timeout Timeout in milliseconds to wait gracefully for the consumer or producer to shut down and terminate its worker threads. 30000 Integer camel.component.kafka.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.kafka.worker-pool To use a shared custom worker pool to continue routing the Exchange after the Kafka server has acknowledged the message that was sent to it from the KafkaProducer using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. The option is a java.util.concurrent.ExecutorService type. String For more information about Producer/Consumer configuration: http://kafka.apache.org/documentation.html#newconsumerconfigs http://kafka.apache.org/documentation.html#producerconfigs 191.4. Message headers 191.4.1. Consumer headers The following headers are available when consuming messages from Kafka. Header constant Header value Type Description KafkaConstants.TOPIC "kafka.TOPIC" String The topic from where the message originated KafkaConstants.PARTITION "kafka.PARTITION" Integer The partition where the message was stored KafkaConstants.OFFSET "kafka.OFFSET" Long The offset of the message KafkaConstants.KEY "kafka.KEY" Object The key of the message if configured KafkaConstants.HEADERS "kafka.HEADERS" org.apache.kafka.common.header.Headers The record headers KafkaConstants.LAST_RECORD_BEFORE_COMMIT "kafka.LAST_RECORD_BEFORE_COMMIT" Boolean Whether or not it's the last record before commit (only available if the autoCommitEnable endpoint parameter is false ) KafkaConstants.MANUAL_COMMIT "CamelKafkaManualCommit" KafkaManualCommit Can be used for forcing a manual offset commit when using the Kafka consumer. 191.4.2. Producer headers Before sending a message to Kafka you can configure the following headers. Header constant Header value Type Description KafkaConstants.KEY "kafka.KEY" Object Required The key of the message in order to ensure that all related messages go to the same partition KafkaConstants.TOPIC "kafka.TOPIC" String The topic to which the message is sent (only read if the bridgeEndpoint endpoint parameter is true ) KafkaConstants.PARTITION_KEY "kafka.PARTITION_KEY" Integer Explicitly specify the partition (only used if the KafkaConstants.KEY header is defined) After the message is sent to Kafka, the following headers are available. Header constant Header value Type Description KafkaConstants.KAFKA_RECORDMETA "org.apache.kafka.clients.producer.RecordMetadata" List<RecordMetadata> The metadata (only configured if the recordMetadata endpoint parameter is true ) 191.5. Samples 191.5.1. Consuming messages from Kafka Here is the minimal route you need in order to read messages from Kafka.
from("kafka:test?brokers=localhost:9092") .log("Message received from Kafka : USD{body}") .log(" on the topic USD{headers[kafka.TOPIC]}") .log(" on the partition USD{headers[kafka.PARTITION]}") .log(" with the offset USD{headers[kafka.OFFSET]}") .log(" with the key USD{headers[kafka.KEY]}") If you need to consume messages from multiple topics you can use a comma separated list of topic names from("kafka:test,test1,test2?brokers=localhost:9092") .log("Message received from Kafka : USD{body}") .log(" on the topic USD{headers[kafka.TOPIC]}") .log(" on the partition USD{headers[kafka.PARTITION]}") .log(" with the offset USD{headers[kafka.OFFSET]}") .log(" with the key USD{headers[kafka.KEY]}") When consuming messages from Kafka you can use your own offset management and not delegate this management to Kafka. In order to keep the offsets the component needs a StateRepository implementation such as FileStateRepository . This bean should be available in the registry. Here how to use it : // Create the repository in which the Kafka offsets will be persisted FileStateRepository repository = FileStateRepository.fileStateRepository(new File("/path/to/repo.dat")); // Bind this repository into the Camel registry JndiRegistry registry = new JndiRegistry(); registry.bind("offsetRepo", repository); // Configure the camel context DefaultCamelContext camelContext = new DefaultCamelContext(registry); camelContext.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" + // Setup the topic and broker address "&groupId=A" + // The consumer processor group ID "&autoOffsetReset=earliest" + // Ask to start from the beginning if we have unknown offset "&offsetRepository=#offsetRepo") // Keep the offsets in the previously configured repository .to("mock:result"); } }); 191.5.2. Producing messages to Kafka Here is the minimal route you need in order to write messages to Kafka. from("direct:start") .setBody(constant("Message from Camel")) // Message to send .setHeader(KafkaConstants.KEY, constant("Camel")) // Key of the message .to("kafka:test?brokers=localhost:9092"); 191.6. SSL configuration You have 2 different ways to configure the SSL communication on the Kafka` component. The first way is through the many SSL endpoint parameters from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" + "&groupId=A" + "&sslKeystoreLocation=/path/to/keystore.jks" + "&sslKeystorePassword=changeit" + "&sslKeyPassword=changeit" + "&securityProtocol=SSL") .to("mock:result"); The second way is to use the sslContextParameters endpoint parameter. 
// Configure the SSLContextParameters object KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/path/to/keystore.jks"); ksp.setPassword("changeit"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword("changeit"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); // Bind this SSLContextParameters into the Camel registry JndiRegistry registry = new JndiRegistry(); registry.bind("ssl", scp); // Configure the camel context DefaultCamelContext camelContext = new DefaultCamelContext(registry); camelContext.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" + // Setup the topic and broker address "&groupId=A" + // The consumer processor group ID "&sslContextParameters=#ssl" + // Reference the SSL configuration "&securityProtocol=SSL") // The security protocol .to("mock:result"); } }); 191.7. Using the Kafka idempotent repository Available from Camel 2.19 The camel-kafka library provides a Kafka topic-based idempotent repository. This repository broadcasts all changes to idempotent state (add/remove) in a Kafka topic, and populates a local in-memory cache for each repository's process instance through event sourcing. The topic used must be unique per idempotent repository instance. The mechanism does not have any requirements about the number of topic partitions, as the repository consumes from all partitions at the same time. It also does not have any requirements about the replication factor of the topic. Each repository instance that uses the topic (e.g. typically on different machines running in parallel) controls its own consumer group, so in a cluster of 10 Camel processes using the same topic each will control its own offset. On startup, the instance subscribes to the topic and rewinds the offset to the beginning, rebuilding the cache to the latest state. The cache will not be considered warmed up until one poll of pollDurationMs in length returns 0 records. Startup will not be completed until either the cache has warmed up, or 30 seconds go by; if the latter happens the idempotent repository may be in an inconsistent state until its consumer catches up to the end of the topic. A KafkaIdempotentRepository has the following properties: Property Description topic The name of the Kafka topic to use to broadcast changes. (required) bootstrapServers The bootstrap.servers property on the internal Kafka producer and consumer. Use this as shorthand if not setting consumerConfig and producerConfig . If used, this component will apply sensible default configurations for the producer and consumer. producerConfig Sets the properties that will be used by the Kafka producer that broadcasts changes. Overrides bootstrapServers , so it must define the Kafka bootstrap.servers property itself. consumerConfig Sets the properties that will be used by the Kafka consumer that populates the cache from the topic. Overrides bootstrapServers , so it must define the Kafka bootstrap.servers property itself. maxCacheSize How many of the most recently used keys should be stored in memory (default 1000). pollDurationMs The poll duration of the Kafka consumer. The local caches are updated immediately. This value will affect how far behind other peers that update their caches from the topic are relative to the idempotent consumer instance that sent the cache action message. The default value of this is 100 ms.
If setting this value explicitly, be aware that there is a tradeoff between the remote cache liveness and the volume of network traffic between this repository's consumer and the Kafka brokers. The cache warmup process also depends on there being one poll that fetches nothing - this indicates that the stream has been consumed up to the current point. If the poll duration is excessively long for the rate at which messages are sent on the topic, there exists a possibility that the cache cannot be warmed up and will operate in an inconsistent state relative to its peers until it catches up. The repository can be instantiated by defining the topic and bootstrapServers , or the producerConfig and consumerConfig property sets can be explicitly defined to enable features such as SSL/SASL. To use it, this repository must be placed in the Camel registry, either manually or by registration as a bean in Spring/Blueprint, as it is CamelContext aware. Sample usage is as follows: KafkaIdempotentRepository kafkaIdempotentRepository = new KafkaIdempotentRepository("idempotent-db-inserts", "localhost:9091"); SimpleRegistry registry = new SimpleRegistry(); registry.put("insertDbIdemRepo", kafkaIdempotentRepository); // must be registered in the registry, to enable access to the CamelContext CamelContext context = new DefaultCamelContext(registry); // later in RouteBuilder... from("direct:performInsert") .idempotentConsumer(header("id")).messageIdRepositoryRef("insertDbIdemRepo") // once-only insert into database .end() In XML: <!-- simple --> <bean id="insertDbIdemRepo" class="org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository"> <property name="topic" value="idempotent-db-inserts"/> <property name="bootstrapServers" value="localhost:9091"/> </bean> <!-- complex --> <bean id="insertDbIdemRepo" class="org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository"> <property name="topic" value="idempotent-db-inserts"/> <property name="maxCacheSize" value="10000"/> <property name="consumerConfig"> <props> <prop key="bootstrap.servers">localhost:9091</prop> </props> </property> <property name="producerConfig"> <props> <prop key="bootstrap.servers">localhost:9091</prop> </props> </property> </bean> 191.8. Using manual commit with Kafka consumer Available as of Camel 2.21 By default the Kafka consumer will use auto commit, where the offset will be committed automatically in the background using a given interval. In case you want to force manual commits, you can use the KafkaManualCommit API from the Camel Exchange, stored on the message header. This requires turning on manual commits by setting the option allowManualCommit to true either on the KafkaComponent or on the endpoint, for example: KafkaComponent kafka = new KafkaComponent(); kafka.setAllowManualCommit(true); ... camelContext.addComponent("kafka", kafka); You can then use the KafkaManualCommit from Java code such as a Camel Processor : public void process(Exchange exchange) { KafkaManualCommit manual = exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class); manual.commitSync(); } This will force a synchronous commit which will block until the commit is acknowledged on Kafka; if it fails an exception is thrown. If you want to use a custom implementation of KafkaManualCommit then you can configure a custom KafkaManualCommitFactory on the KafkaComponent that creates instances of your custom implementation.
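Putting these pieces together, a complete consumer route with manual commits could look like the following minimal sketch; the broker address, topic name and group id are only illustrative placeholders, and autoCommitEnable is switched off so that offsets are committed solely where commitSync() is called.

// A sketch of a manual-commit consumer route; broker, topic and groupId are placeholders
from("kafka:test?brokers=localhost:9092" +
        "&groupId=A" +
        "&autoCommitEnable=false" +   // turn off automatic offset commits
        "&allowManualCommit=true")    // expose KafkaManualCommit on the exchange
    .process(exchange -> {
        // ... process the message body here ...
        KafkaManualCommit manual = exchange.getIn()
            .getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
        manual.commitSync();          // commit the offset only after successful processing
    });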
191.9. Kafka Headers propagation Available as of Camel 2.22 When consuming messages from Kafka, headers will be propagated to Camel exchange headers automatically. The producing flow is backed by the same behaviour: Camel headers of a particular exchange will be propagated to Kafka message headers. Since Kafka headers allow only byte[] values, in order for a Camel exchange header to be propagated its value must be serializable to byte[] , otherwise the header will be skipped. The following header value types are supported: String , Integer , Long , Double , Boolean , byte[] . Note: all headers propagated from Kafka to the Camel exchange will contain a byte[] value by default. To override the default behaviour, uri parameters can be set: kafkaHeaderDeserializer for the from route and kafkaHeaderSerializer for the to route. Example: By default all headers are filtered by KafkaHeaderFilterStrategy . This strategy filters out headers which start with the Camel or org.apache.camel prefixes. The default strategy can be overridden by using the headerFilterStrategy uri parameter in both to and from routes: the myStrategy object should be a subclass of HeaderFilterStrategy and must be placed in the Camel registry, either manually or by registration as a bean in Spring/Blueprint, as it is CamelContext aware.
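As a small illustration of these propagation rules, the following sketch (the topic name my_topic and the local broker address are only placeholders) sets two Camel headers before sending to Kafka; the String header is serialized into a Kafka record header, while the header whose value type is not in the supported list is skipped.

from("direct:start")
    .setBody(constant("payload"))
    .setHeader("MyTraceId", constant("abc-123"))                // String is a supported type, propagated as a Kafka header
    .setHeader("MyTimestamp", constant(new java.util.Date()))   // Date is not a supported type, so this header is skipped
    .to("kafka:my_topic?brokers=localhost:9092");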
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-kafka</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "kafka:topic[?options]", "kafka:topic", "from(\"kafka:test?brokers=localhost:9092\") .log(\"Message received from Kafka : USD{body}\") .log(\" on the topic USD{headers[kafka.TOPIC]}\") .log(\" on the partition USD{headers[kafka.PARTITION]}\") .log(\" with the offset USD{headers[kafka.OFFSET]}\") .log(\" with the key USD{headers[kafka.KEY]}\")", "from(\"kafka:test,test1,test2?brokers=localhost:9092\") .log(\"Message received from Kafka : USD{body}\") .log(\" on the topic USD{headers[kafka.TOPIC]}\") .log(\" on the partition USD{headers[kafka.PARTITION]}\") .log(\" with the offset USD{headers[kafka.OFFSET]}\") .log(\" with the key USD{headers[kafka.KEY]}\")", "// Create the repository in which the Kafka offsets will be persisted FileStateRepository repository = FileStateRepository.fileStateRepository(new File(\"/path/to/repo.dat\")); // Bind this repository into the Camel registry JndiRegistry registry = new JndiRegistry(); registry.bind(\"offsetRepo\", repository); // Configure the camel context DefaultCamelContext camelContext = new DefaultCamelContext(registry); camelContext.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from(\"kafka:\" + TOPIC + \"?brokers=localhost:{{kafkaPort}}\" + // Setup the topic and broker address \"&groupId=A\" + // The consumer processor group ID \"&autoOffsetReset=earliest\" + // Ask to start from the beginning if we have unknown offset \"&offsetRepository=#offsetRepo\") // Keep the offsets in the previously configured repository .to(\"mock:result\"); } });", "from(\"direct:start\") .setBody(constant(\"Message from Camel\")) // Message to send .setHeader(KafkaConstants.KEY, constant(\"Camel\")) // Key of the message .to(\"kafka:test?brokers=localhost:9092\");", "from(\"kafka:\" + TOPIC + \"?brokers=localhost:{{kafkaPort}}\" + \"&groupId=A\" + \"&sslKeystoreLocation=/path/to/keystore.jks\" + \"&sslKeystorePassword=changeit\" + \"&sslKeyPassword=changeit\" + \"&securityProtocol=SSL\") .to(\"mock:result\");", "// Configure the SSLContextParameters object KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/path/to/keystore.jks\"); ksp.setPassword(\"changeit\"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword(\"changeit\"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); // Bind this SSLContextParameters into the Camel registry JndiRegistry registry = new JndiRegistry(); registry.bind(\"ssl\", scp); // Configure the camel context DefaultCamelContext camelContext = new DefaultCamelContext(registry); camelContext.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from(\"kafka:\" + TOPIC + \"?brokers=localhost:{{kafkaPort}}\" + // Setup the topic and broker address \"&groupId=A\" + // The consumer processor group ID \"&sslContextParameters=#ssl\" + // The security protocol \"&securityProtocol=SSL) // Reference the SSL configuration .to(\"mock:result\"); } });", "KafkaIdempotentRepository kafkaIdempotentRepository = new KafkaIdempotentRepository(\"idempotent-db-inserts\", \"localhost:9091\"); SimpleRegistry registry = new SimpleRegistry(); registry.put(\"insertDbIdemRepo\", kafkaIdempotentRepository); // must be registered in the registry, to enable access to the CamelContext CamelContext context = new 
CamelContext(registry); // later in RouteBuilder from(\"direct:performInsert\") .idempotentConsumer(header(\"id\")).messageIdRepositoryRef(\"insertDbIdemRepo\") // once-only insert into database .end()", "<!-- simple --> <bean id=\"insertDbIdemRepo\" class=\"org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository\"> <property name=\"topic\" value=\"idempotent-db-inserts\"/> <property name=\"bootstrapServers\" value=\"localhost:9091\"/> </bean> <!-- complex --> <bean id=\"insertDbIdemRepo\" class=\"org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository\"> <property name=\"topic\" value=\"idempotent-db-inserts\"/> <property name=\"maxCacheSize\" value=\"10000\"/> <property name=\"consumerConfig\"> <props> <prop key=\"bootstrap.servers\">localhost:9091</prop> </props> </property> <property name=\"producerConfig\"> <props> <prop key=\"bootstrap.servers\">localhost:9091</prop> </props> </property> </bean>", "KafkaComponent kafka = new KafkaComponent(); kafka.setAllowManualCommit(true); camelContext.addComponent(\"kafka\", kafka);", "public void process(Exchange exchange) { KafkaManualCommit manual = exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class); manual.commitSync(); }", "from(\"kafka:my_topic?kafkaHeaderDeserializer=#myDeserializer\") .to(\"kafka:my_topic?kafkaHeaderSerializer=#mySerializer\")", "from(\"kafka:my_topic?headerFilterStrategy=#myStrategy\") .to(\"kafka:my_topic?headerFilterStrategy=#myStrategy\")" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/kafka-component
8.178. rgmanager
8.178. rgmanager 8.178.1. RHBA-2013:1600 - rgmanager bug fix update Updated rgmanager packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The rgmanager package contains the Red Hat Resource Group Manager, which is used to create and manage high-availability server applications in the event of system downtime. Bug Fixes BZ# 862075 Previously, if the main rgmanager process died, either by an unexpected termination with a segmentation fault or by killing it manually, any service running on it was immediately recovered on another node rather than waiting for fencing, as the rgmanager process did in previous versions. This was problematic for services containing Highly Available Logical Volume Manager (HA-LVM) resources using tagging, because the start operation failed if the tag that was found belonged to a node that was still a member of the cluster. With this update, service recovery is delayed until after the node is removed from the configuration and fenced, which allows the LVM resource to recover properly. BZ# 983296 Previously, attempts to start an MRG Messaging (MRG-M) broker caused rgmanager to terminate unexpectedly with a segmentation fault. This was caused by subtle memory corruption introduced by calling the pthread_mutex_unlock() function on a mutex that was not locked. This update addresses scenarios where memory could be corrupted when calling pthread_mutex_unlock(), and rgmanager no longer terminates unexpectedly in the described situation. Users of rgmanager are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rgmanager
Chapter 2. Container security
Chapter 2. Container security 2.1. Understanding container security Securing a containerized application relies on multiple levels of security: Container security begins with a trusted base container image and continues through the container build process as it moves through your CI/CD pipeline. Important Image streams by default do not automatically update. This default behavior might create a security issue because security updates to images referenced by an image stream do not automatically occur. For information about how to override this default behavior, see Configuring periodic importing of imagestreamtags . When a container is deployed, its security depends on it running on secure operating systems and networks, and establishing firm boundaries between the container itself and the users and hosts that interact with it. Continued security relies on being able to scan container images for vulnerabilities and having an efficient way to correct and replace vulnerable images. Beyond what a platform such as OpenShift Container Platform offers out of the box, your organization will likely have its own security demands. Some level of compliance verification might be needed before you can even bring OpenShift Container Platform into your data center. Likewise, you may need to add your own agents, specialized hardware drivers, or encryption features to OpenShift Container Platform, before it can meet your organization's security standards. This guide provides a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. It then points you to specific OpenShift Container Platform documentation to help you achieve those security measures. This guide contains the following information: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. The goal of this guide is to understand the incredible security benefits of using OpenShift Container Platform for your containerized workloads and how the entire Red Hat ecosystem plays a part in making and keeping containers secure. It will also help you understand how you can engage with the OpenShift Container Platform to achieve your organization's security goals. 2.1.1. What are containers? Containers package an application and all its dependencies into a single image that can be promoted from development, to test, to production, without change. A container might be part of a larger application that works closely with other containers. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. 
Some of the benefits of using containers include: Infrastructure Applications Sandboxed application processes on a shared Linux operating system kernel Package my application and all of its dependencies Simpler, lighter, and denser than virtual machines Deploy to any environment in seconds and enable CI/CD Portable across different environments Easily access and share containerized components See Understanding Linux containers from the Red Hat Customer Portal to find out more about Linux containers. To learn about RHEL container tools, see Building, running, and managing containers in the RHEL product documentation. 2.1.2. What is OpenShift Container Platform? Automating how containerized applications are deployed, run, and managed is the job of a platform such as OpenShift Container Platform. At its core, OpenShift Container Platform relies on the Kubernetes project to provide the engine for orchestrating containers across many nodes in scalable data centers. Kubernetes is a project, which can run using different operating systems and add-on components that offer no guarantees of supportability from the project. As a result, the security of different Kubernetes platforms can vary. OpenShift Container Platform is designed to lock down Kubernetes security and integrate the platform with a variety of extended components. To do this, OpenShift Container Platform draws on the extensive Red Hat ecosystem of open source technologies that include the operating systems, authentication, storage, networking, development tools, base container images, and many other components. OpenShift Container Platform can leverage Red Hat's experience in uncovering and rapidly deploying fixes for vulnerabilities in the platform itself as well as the containerized applications running on the platform. Red Hat's experience also extends to efficiently integrating new components with OpenShift Container Platform as they become available and adapting technologies to individual customer needs. Additional resources OpenShift Container Platform architecture OpenShift Security Guide 2.2. Understanding host and VM security Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding RHCOS, which is the operating system used by OpenShift Container Platform, will help you see how the host systems protect containers and hosts from each other. 2.2.1. Securing containers on Red Hat Enterprise Linux CoreOS (RHCOS) Containers simplify the act of deploying many applications to run on the same host, using the same kernel and container runtime to spin up each container. The applications can be owned by many users and, because they are kept separate, can run different, and even incompatible, versions of those applications at the same time without issue. In Linux, containers are just a special type of process, so securing containers is similar in many ways to securing any other running process. An environment for running containers starts with an operating system that can secure the host kernel from containers and other processes running on the host, as well as secure containers from each other. Because OpenShift Container Platform 4.14 runs on RHCOS hosts, with the option of using Red Hat Enterprise Linux (RHEL) as worker nodes, the following concepts apply by default to any deployed OpenShift Container Platform cluster. 
These RHEL security features are at the core of what makes running containers in OpenShift Container Platform more secure: Linux namespaces enable creating an abstraction of a particular global system resource to make it appear as a separate instance to processes within a namespace. Consequently, several containers can use the same computing resource simultaneously without creating a conflict. Container namespaces that are separate from the host by default include mount table, process table, network interface, user, control group, UTS, and IPC namespaces. Those containers that need direct access to host namespaces need to have elevated permissions to request that access. See Overview of Containers in Red Hat Systems from the RHEL 8 container documentation for details on the types of namespaces. SELinux provides an additional layer of security to keep containers isolated from each other and from the host. SELinux allows administrators to enforce mandatory access controls (MAC) for every user, application, process, and file. Warning Disabling SELinux on RHCOS is not supported. CGroups (control groups) limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. CGroups are used to ensure that containers on the same host are not impacted by each other. Secure computing mode (seccomp) profiles can be associated with a container to restrict available system calls. See page 94 of the OpenShift Security Guide for details about seccomp. Deploying containers using RHCOS reduces the attack surface by minimizing the host environment and tuning it for containers. The CRI-O container engine further reduces that attack surface by implementing only those features required by Kubernetes and OpenShift Container Platform to run and manage containers, as opposed to other container engines that implement desktop-oriented standalone features. RHCOS is a version of Red Hat Enterprise Linux (RHEL) that is specially configured to work as control plane (master) and worker nodes on OpenShift Container Platform clusters. So RHCOS is tuned to efficiently run container workloads, along with Kubernetes and OpenShift Container Platform services. To further protect RHCOS systems in OpenShift Container Platform clusters, most containers, except those managing or monitoring the host system itself, should run as a non-root user. Dropping the privilege level or creating containers with the least amount of privileges possible is recommended best practice for protecting your own OpenShift Container Platform clusters. Additional resources How nodes enforce resource constraints Managing security context constraints Supported platforms for OpenShift clusters Requirements for a cluster with user-provisioned infrastructure Choosing how to configure RHCOS Ignition Kernel arguments Kernel modules Disk encryption Chrony time service About the OpenShift Update Service FIPS cryptography 2.2.2. Comparing virtualization and containers Traditional virtualization provides another way to keep application environments separate on the same physical host. However, virtual machines work in a different way than containers. Virtualization relies on a hypervisor spinning up guest virtual machines (VMs), each of which has its own operating system (OS), represented by a running kernel, as well as the running application and its dependencies. With VMs, the hypervisor isolates the guests from each other and from the host kernel. 
Fewer individuals and processes have access to the hypervisor, reducing the attack surface on the physical server. That said, security must still be monitored: one guest VM might be able to use hypervisor bugs to gain access to another VM or the host kernel. And, when the OS needs to be patched, it must be patched on all guest VMs using that OS. Containers can be run inside guest VMs, and there might be use cases where this is desirable. For example, you might be deploying a traditional application in a container, perhaps to lift-and-shift an application to the cloud. Container separation on a single host, however, provides a more lightweight, flexible, and easier-to-scale deployment solution. This deployment model is particularly appropriate for cloud-native applications. Containers are generally much smaller than VMs and consume less memory and CPU. See Linux Containers Compared to KVM Virtualization in the RHEL 7 container documentation to learn about the differences between container and VMs. 2.2.3. Securing OpenShift Container Platform When you deploy OpenShift Container Platform, you have the choice of an installer-provisioned infrastructure (there are several available platforms) or your own user-provisioned infrastructure. Some low-level security-related configuration, such as enabling FIPS mode or adding kernel modules required at first boot, might benefit from a user-provisioned infrastructure. Likewise, user-provisioned infrastructure is appropriate for disconnected OpenShift Container Platform deployments. Keep in mind that, when it comes to making security enhancements and other configuration changes to OpenShift Container Platform, the goals should include: Keeping the underlying nodes as generic as possible. You want to be able to easily throw away and spin up similar nodes quickly and in prescriptive ways. Managing modifications to nodes through OpenShift Container Platform as much as possible, rather than making direct, one-off changes to the nodes. In pursuit of those goals, most node changes should be done during installation through Ignition or later using MachineConfigs that are applied to sets of nodes by the Machine Config Operator. Examples of security-related configuration changes you can do in this way include: Adding kernel arguments Adding kernel modules Enabling support for FIPS cryptography Configuring disk encryption Configuring the chrony time service Besides the Machine Config Operator, there are several other Operators available to configure OpenShift Container Platform infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of OpenShift Container Platform cluster updates. Additional resources FIPS cryptography 2.3. Hardening RHCOS RHCOS was created and tuned to be deployed in OpenShift Container Platform with few if any changes needed to RHCOS nodes. Every organization adopting OpenShift Container Platform has its own requirements for system hardening. As a RHEL system with OpenShift-specific modifications and features added (such as Ignition, ostree, and a read-only /usr to provide limited immutability), RHCOS can be hardened just as you would any RHEL system. Differences lie in the ways you manage the hardening. A key feature of OpenShift Container Platform and its Kubernetes engine is to be able to quickly scale applications and infrastructure up and down as needed. 
Unless it is unavoidable, you do not want to make direct changes to RHCOS by logging into a host and adding software or changing settings. You want to have the OpenShift Container Platform installer and control plane manage changes to RHCOS so new nodes can be spun up without manual intervention. So, if you are setting out to harden RHCOS nodes in OpenShift Container Platform to meet your security needs, you should consider both what to harden and how to go about doing that hardening. 2.3.1. Choosing what to harden in RHCOS The RHEL 9 Security Hardening guide describes how you should approach security for any RHEL system. Use this guide to learn how to approach cryptography, evaluate vulnerabilities, and assess threats to various services. Likewise, you can learn how to scan for compliance standards, check file integrity, perform auditing, and encrypt storage devices. With the knowledge of what features you want to harden, you can then decide how to harden them in RHCOS. 2.3.2. Choosing how to harden RHCOS Direct modification of RHCOS systems in OpenShift Container Platform is discouraged. Instead, you should think of modifying systems in pools of nodes, such as worker nodes and control plane nodes. When a new node is needed, in non-bare metal installs, you can request a new node of the type you want and it will be created from an RHCOS image plus the modifications you created earlier. There are opportunities for modifying RHCOS before installation, during installation, and after the cluster is up and running. 2.3.2.1. Hardening before installation For bare metal installations, you can add hardening features to RHCOS before beginning the OpenShift Container Platform installation. For example, you can add kernel options when you boot the RHCOS installer to turn security features on or off, such as various SELinux booleans or low-level settings, such as symmetric multithreading. Warning Disabling SELinux on RHCOS nodes is not supported. Although bare metal RHCOS installations are more difficult, they offer the opportunity of getting operating system changes in place before starting the OpenShift Container Platform installation. This can be important when you need to ensure that certain features, such as disk encryption or special networking settings, be set up at the earliest possible moment. 2.3.2.2. Hardening during installation You can interrupt the OpenShift Container Platform installation process and change Ignition configs. Through Ignition configs, you can add your own files and systemd services to the RHCOS nodes. You can also make some basic security-related changes to the install-config.yaml file used for installation. Contents added in this way are available at each node's first boot. 2.3.2.3. Hardening after the cluster is running After the OpenShift Container Platform cluster is up and running, there are several ways to apply hardening features to RHCOS: Daemon set: If you need a service to run on every node, you can add that service with a Kubernetes DaemonSet object . Machine config: MachineConfig objects contain a subset of Ignition configs in the same format. By applying machine configs to all worker or control plane nodes, you can ensure that the node of the same type that is added to the cluster has the same changes applied. All of the features noted here are described in the OpenShift Container Platform product documentation. 
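As an illustration of the machine config approach just described, a MachineConfig object that adds a kernel argument to all worker nodes could look like the following minimal sketch; the object name and the audit=1 argument are examples only, not a recommendation, and the role label selects the worker pool.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 05-worker-kernelarg-audit
spec:
  kernelArguments:
    - audit=1

Applying the object and waiting for the worker machine config pool to roll out follows the same pattern shown in the signature verification procedure later in this chapter.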
Additional resources OpenShift Security Guide Choosing how to configure RHCOS Modifying Nodes Manually creating the installation configuration file Creating the Kubernetes manifest and Ignition config files Installing RHCOS by using an ISO image Customizing nodes Adding kernel arguments to Nodes Installation configuration parameters - see fips Support for FIPS cryptography RHEL core crypto components 2.4. Container image signatures Red Hat delivers signatures for the images in the Red Hat Container Registries. Those signatures can be automatically verified when being pulled to OpenShift Container Platform 4 clusters by using the Machine Config Operator (MCO). Quay.io serves most of the images that make up OpenShift Container Platform, and only the release image is signed. Release images refer to the approved OpenShift Container Platform images, offering a degree of protection against supply chain attacks. However, some extensions to OpenShift Container Platform, such as logging, monitoring, and service mesh, are shipped as Operators from the Operator Lifecycle Manager (OLM). Those images ship from the Red Hat Ecosystem Catalog Container images registry. To verify the integrity of those images between Red Hat registries and your infrastructure, enable signature verification. 2.4.1. Enabling signature verification for Red Hat Container Registries Enabling container signature validation for Red Hat Container Registries requires writing a signature verification policy file specifying the keys to verify images from these registries. For RHEL8 nodes, the registries are already defined in /etc/containers/registries.d by default. Procedure Create a Butane config file, 51-worker-rh-registry-trust.bu , containing the necessary configuration for the worker nodes. Note See "Creating machine configs with Butane" for information about Butane. 
variant: openshift version: 4.14.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Use Butane to generate a machine config YAML file, 51-worker-rh-registry-trust.yaml , containing the file to be written to disk on the worker nodes: USD butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml Apply the created machine config: USD oc apply -f 51-worker-rh-registry-trust.yaml Check that the worker machine config pool has rolled out with the new machine config: Check that the new machine config was created: USD oc get mc Sample output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2 1 New machine config 2 New rendered machine config Check that the worker machine config pool is updating with the new machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1 1 When the UPDATING field is True , the machine config pool is updating with the new machine config. When the field becomes False , the worker machine config pool has rolled out to the new machine config. If your cluster uses any RHEL7 worker nodes, when the worker machine config pool is updated, create YAML files on those nodes in the /etc/containers/registries.d directory, which specify the location of the detached signatures for a given registry server. The following example works only for images hosted in registry.access.redhat.com and registry.redhat.io . 
Start a debug session to each RHEL7 worker node: USD oc debug node/<node_name> Change your root directory to /host : sh-4.2# chroot /host Create a /etc/containers/registries.d/registry.redhat.io.yaml file that contains the following: docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Create a /etc/containers/registries.d/registry.access.redhat.com.yaml file that contains the following: docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore Exit the debug session. 2.4.2. Verifying the signature verification configuration After you apply the machine configs to the cluster, the Machine Config Controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Prerequisites You enabled signature verification by using a machine config file. Procedure On the command line, run the following command to display information about a desired worker: USD oc describe machineconfigpool/worker Example output of initial worker monitoring Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated 
Machine Count: 0 Events: <none> Run the oc describe command again: USD oc describe machineconfigpool/worker Example output after the worker is updated ... Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 ... Note The Observed Generation parameter shows an increased count based on the generation of the controller-produced configuration. This controller updates this value even if it fails to process the specification and generate a revision. The Configuration Source value points to the 51-worker-rh-registry-trust configuration. Confirm that the policy.json file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/policy.json Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Confirm that the registry.redhat.io.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Confirm that the registry.access.redhat.com.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore 2.4.3. Understanding the verification of container images lacking verifiable signatures Each OpenShift Container Platform release image is immutable and signed with a Red Hat production key. During an OpenShift Container Platform update or installation, a release image might deploy container images that do not have verifiable signatures. Each signed release image digest is immutable. Each reference in the release image is to the immutable digest of another image, so the contents can be trusted transitively. In other words, the signature on the release image validates all release contents. 
For example, the image references lacking a verifiable signature are contained in the signed OpenShift Container Platform release image: Example release info output USD oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2 1 Signed release image SHA. 2 Container image lacking a verifiable signature included in the release. 2.4.3.1. Automated verification during updates Verification of signatures is automatic. The OpenShift Cluster Version Operator (CVO) verifies signatures on the release images during an OpenShift Container Platform update. This is an internal process. An OpenShift Container Platform installation or update fails if the automated verification fails. Verification of signatures can also be done manually using the skopeo command-line utility. Additional resources Introduction to OpenShift Updates 2.4.3.2. Using skopeo to verify signatures of Red Hat container images You can verify the signatures for container images included in an OpenShift Container Platform release image by pulling those signatures from OCP release mirror site . Because the signatures on the mirror site are not in a format readily understood by Podman or CRI-O, you can use the skopeo standalone-verify command to verify that the your release images are signed by Red Hat. Prerequisites You have installed the skopeo command-line utility. Procedure Get the full SHA for your release by running the following command: USD oc adm release info <release_version> \ 1 1 Substitute <release_version> with your release number, for example, 4.14.3 . Example output snippet --- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 --- Pull down the Red Hat release key by running the following command: USD curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt Get the signature file for the specific release that you want to verify by running the following command: USD curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \ 1 1 Replace <sha_from_version> with SHA value from the full link to the mirror site that matches the SHA of your release. For example, the link to the signature for the 4.12.23 release is https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55/signature-1 , and the SHA value is e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Get the manifest for the release image by running the following command: USD skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \ 1 1 Replace <quay_link_to_release> with the output of the oc adm release info command. For example, quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Use skopeo to verify the signature: USD skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key where: <release_number> Specifies the release number, for example 4.14.3 . <arch> Specifies the architecture, for example x86_64 . 
Example output Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 2.4.4. Additional resources Machine Config Overview 2.5. Understanding compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. 2.5.1. Understanding compliance and risk management FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. To understand Red Hat's view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book . Additional resources Installing a cluster in FIPS mode 2.6. Securing container content To ensure the security of the content inside your containers you need to start with trusted base images, such as Red Hat Universal Base Images, and add trusted software. To check the ongoing security of your container images, there are both Red Hat and third-party tools for scanning images. 2.6.1. Securing inside the container Applications and infrastructures are composed of readily available components, many of which are open source packages such as, the Linux operating system, JBoss Web Server, PostgreSQL, and Node.js. Containerized versions of these packages are also available. However, you need to know where the packages originally came from, what versions are used, who built them, and whether there is any malicious code inside them. Some questions to answer include: Will what is inside the containers compromise your infrastructure? Are there known vulnerabilities in the application layer? Are the runtime and operating system layers current? By building your containers from Red Hat Universal Base Images (UBI) you are assured of a foundation for your container images that consists of the same RPM-packaged software that is included in Red Hat Enterprise Linux. No subscriptions are required to either use or redistribute UBI images. To assure ongoing security of the containers themselves, security scanning features, used directly from RHEL or added to OpenShift Container Platform, can alert you when an image you are using has vulnerabilities. OpenSCAP image scanning is available in RHEL and the Red Hat Quay Container Security Operator can be added to check container images used in OpenShift Container Platform. 2.6.2. Creating redistributable images with UBI To create containerized applications, you typically start with a trusted base image that offers the components that are usually provided by the operating system. 
These include the libraries, utilities, and other features the application expects to see in the operating system's file system. Red Hat Universal Base Images (UBI) were created to encourage anyone building their own containers to start with one that is made entirely from Red Hat Enterprise Linux rpm packages and other content. These UBI images are updated regularly to keep up with security patches and are free to use and redistribute with container images built to include your own software. Search the Red Hat Ecosystem Catalog to both find and check the health of different UBI images. As someone creating secure container images, you might be interested in these two general types of UBI images: UBI : There are standard UBI images for RHEL 7, 8, and 9 ( ubi7/ubi , ubi8/ubi , and ubi9/ubi ), as well as minimal images based on those systems ( ubi7/ubi-minimal , ubi8/ubi-minimal , and ubi9/ubi-minimal ). All of these images are preconfigured to point to free repositories of RHEL software that you can add to the container images you build, using standard yum and dnf commands. Red Hat encourages people to use these images on other distributions, such as Fedora and Ubuntu. Red Hat Software Collections : Search the Red Hat Ecosystem Catalog for rhscl/ to find images created to use as base images for specific types of applications. For example, there are Apache httpd ( rhscl/httpd-* ), Python ( rhscl/python-* ), Ruby ( rhscl/ruby-* ), Node.js ( rhscl/nodejs-* ), and Perl ( rhscl/perl-* ) rhscl images. Keep in mind that while UBI images are freely available and redistributable, Red Hat support for these images is only available through Red Hat product subscriptions. See Using Red Hat Universal Base Images in the Red Hat Enterprise Linux documentation for information on how to use and build on standard, minimal, and init UBI images. 2.6.3. Security scanning in RHEL For Red Hat Enterprise Linux (RHEL) systems, OpenSCAP scanning is available from the openscap-utils package. In RHEL, you can use the openscap-podman command to scan images for vulnerabilities. See Scanning containers and container images for vulnerabilities in the Red Hat Enterprise Linux documentation. OpenShift Container Platform enables you to leverage RHEL scanners with your CI/CD process. For example, you can integrate static code analysis tools that test for security flaws in your source code and software composition analysis tools that identify open source libraries and provide metadata on those libraries, such as known vulnerabilities. 2.6.3.1. Scanning OpenShift images For the container images that are running in OpenShift Container Platform and are pulled from Red Hat Quay registries, you can use an Operator to list the vulnerabilities of those images. The Red Hat Quay Container Security Operator can be added to OpenShift Container Platform to provide vulnerability reporting for images added to selected namespaces. Container image scanning for Red Hat Quay is performed by Clair. In Red Hat Quay, Clair can search for and report vulnerabilities in images built from RHEL, CentOS, Oracle, Alpine, Debian, and Ubuntu operating system software. 2.6.4. Integrating external scanning OpenShift Container Platform makes use of object annotations to extend functionality. External tools, such as vulnerability scanners, can annotate image objects with metadata to summarize results and control pod execution. This section describes the recognized format of this annotation so it can be reliably used in consoles to display useful data to users.
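As a quick orientation before the format details that follow, you can check which annotations are already present on an image object with the oc client. This is only a sketch: the digest placeholder is illustrative, and listing image objects is cluster-scoped, so it assumes you have read access to the images resource.

oc get images
oc get image sha256:<digest> -o jsonpath='{.metadata.annotations}'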
2.6.4.1. Image metadata There are different types of image quality data, including package vulnerabilities and open source software (OSS) license compliance. Additionally, there may be more than one provider of this metadata. To that end, the following annotation format has been reserved: Table 2.1. Annotation key format Component Description Acceptable values qualityType Metadata type vulnerability license operations policy providerId Provider ID string openscap redhatcatalog redhatinsights blackduck jfrog 2.6.4.1.1. Example annotation keys The value of the image quality annotation is structured data that must adhere to the following format: Table 2.2. Annotation value format Field Required? Description Type name Yes Provider display name String timestamp Yes Scan timestamp String description No Short description String reference Yes URL of information source or more details. Required so user may validate the data. String scannerVersion No Scanner version String compliant No Compliance pass or fail Boolean summary No Summary of issues found List (see table below) The summary field must adhere to the following format: Table 2.3. Summary field value format Field Description Type label Display label for component (for example, "critical," "important," "moderate," "low," or "health") String data Data for this component (for example, count of vulnerabilities found or score) String severityIndex Component index allowing for ordering and assigning graphical representation. The value is range 0..3 where 0 = low. Integer reference URL of information source or more details. Optional. String 2.6.4.1.2. Example annotation values This example shows an OpenSCAP annotation for an image with vulnerability summary data and a compliance boolean: OpenSCAP annotation { "name": "OpenSCAP", "description": "OpenSCAP vulnerability score", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://www.open-scap.org/930492", "compliant": true, "scannerVersion": "1.2", "summary": [ { "label": "critical", "data": "4", "severityIndex": 3, "reference": null }, { "label": "important", "data": "12", "severityIndex": 2, "reference": null }, { "label": "moderate", "data": "8", "severityIndex": 1, "reference": null }, { "label": "low", "data": "26", "severityIndex": 0, "reference": null } ] } This example shows the Container images section of the Red Hat Ecosystem Catalog annotation for an image with health index data with an external URL for additional details: Red Hat Ecosystem Catalog annotation { "name": "Red Hat Ecosystem Catalog", "description": "Container health index", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://access.redhat.com/errata/RHBA-2016:1566", "compliant": null, "scannerVersion": "1.2", "summary": [ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ] } 2.6.4.2. Annotating image objects While image stream objects are what an end user of OpenShift Container Platform operates against, image objects are annotated with security metadata. Image objects are cluster-scoped, pointing to a single image that may be referenced by many image streams and tags. 2.6.4.2.1. 
Example annotate CLI command Replace <image> with an image digest, for example sha256:401e359e0f45bfdcf004e258b72e253fd07fba8cc5c6f2ed4f4608fb119ecc2 : USD oc annotate image <image> \ quality.images.openshift.io/vulnerability.redhatcatalog='{ \ "name": "Red Hat Ecosystem Catalog", \ "description": "Container health index", \ "timestamp": "2020-06-01T05:04:46Z", \ "compliant": null, \ "scannerVersion": "1.2", \ "reference": "https://access.redhat.com/errata/RHBA-2020:2347", \ "summary": "[ \ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ]" }' 2.6.4.3. Controlling pod execution Use the images.openshift.io/deny-execution image policy to programmatically control if an image can be run. 2.6.4.3.1. Example annotation annotations: images.openshift.io/deny-execution: true 2.6.4.4. Integration reference In most cases, external tools such as vulnerability scanners develop a script or plugin that watches for image updates, performs scanning, and annotates the associated image object with the results. Typically this automation calls the OpenShift Container Platform 4.14 REST APIs to write the annotation. See OpenShift Container Platform REST APIs for general information on the REST APIs. 2.6.4.4.1. Example REST API call The following example call using curl overrides the value of the annotation. Be sure to replace the values for <token> , <openshift_server> , <image_id> , and <image_annotation> . Patch API call USD curl -X PATCH \ -H "Authorization: Bearer <token>" \ -H "Content-Type: application/merge-patch+json" \ https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> \ --data '{ <image_annotation> }' The following is an example of PATCH payload data: Patch call data { "metadata": { "annotations": { "quality.images.openshift.io/vulnerability.redhatcatalog": "{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }" } } } Additional resources Image stream objects 2.7. Using container registries securely Container registries store container images to: Make images accessible to others Organize images into repositories that can include multiple versions of an image Optionally limit access to images, based on different authentication methods, or make them publicly available There are public container registries, such as Quay.io and Docker Hub where many people and organizations share their images. The Red Hat Registry offers supported Red Hat and partner images, while the Red Hat Ecosystem Catalog offers detailed descriptions and health checks for those images. To manage your own registry, you could purchase a container registry such as Red Hat Quay . From a security standpoint, some registries provide special features to check and improve the health of your containers. For example, Red Hat Quay offers container vulnerability scanning with Clair security scanner, build triggers to automatically rebuild images when source code changes in GitHub and other locations, and the ability to use role-based access control (RBAC) to secure access to images. 2.7.1. Knowing where containers come from? There are tools you can use to scan and track the contents of your downloaded and deployed container images. However, there are many public sources of container images. 
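One straightforward way to check where an image comes from before you use it is to inspect its metadata directly from the registry. The following is a sketch using skopeo against a publicly available Red Hat image; the jq filter and the label names shown ( vendor , build-date ) are assumptions about labels commonly present on Red Hat images rather than a guaranteed schema:

skopeo inspect docker://registry.access.redhat.com/ubi9/ubi | jq '{name: .Name, digest: .Digest, vendor: .Labels.vendor, buildDate: .Labels."build-date"}'

The digest reported here can be compared with the digest of the image actually deployed in your cluster, and the same approach works against your own registries.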
When using public container registries, you can add a layer of protection by using trusted sources. 2.7.2. Immutable and certified containers Consuming security updates is particularly important when managing immutable containers . Immutable containers are containers that will never be changed while running. When you deploy immutable containers, you do not step into the running container to replace one or more binaries. From an operational standpoint, you rebuild and redeploy an updated container image to replace a container instead of changing it. Red Hat certified images are: Free of known vulnerabilities in the platform components or layers Compatible across the RHEL platforms, from bare metal to cloud Supported by Red Hat The list of known vulnerabilities is constantly evolving, so you must track the contents of your deployed container images, as well as newly downloaded images, over time. You can use Red Hat Security Advisories (RHSAs) to alert you to any newly discovered issues in Red Hat certified container images, and direct you to the updated image. Alternatively, you can go to the Red Hat Ecosystem Catalog to look up that and other security-related issues for each Red Hat image. 2.7.3. Getting containers from Red Hat Registry and Ecosystem Catalog Red Hat lists certified container images for Red Hat products and partner offerings from the Container Images section of the Red Hat Ecosystem Catalog. From that catalog, you can see details of each image, including CVE, software packages listings, and health scores. Red Hat images are actually stored in what is referred to as the Red Hat Registry , which is represented by a public container registry ( registry.access.redhat.com ) and an authenticated registry ( registry.redhat.io ). Both include basically the same set of container images, with registry.redhat.io including some additional images that require authentication with Red Hat subscription credentials. Container content is monitored for vulnerabilities by Red Hat and updated regularly. When Red Hat releases security updates, such as fixes to glibc , DROWN , or Dirty Cow , any affected container images are also rebuilt and pushed to the Red Hat Registry. Red Hat uses a health index to reflect the security risk for each container provided through the Red Hat Ecosystem Catalog. Because containers consume software provided by Red Hat and the errata process, old, stale containers are insecure whereas new, fresh containers are more secure. To illustrate the age of containers, the Red Hat Ecosystem Catalog uses a grading system. A freshness grade is a measure of the oldest and most severe security errata available for an image. "A" is more up to date than "F". See Container Health Index grades as used inside the Red Hat Ecosystem Catalog for more details on this grading system. See the Red Hat Product Security Center for details on security updates and vulnerabilities related to Red Hat software. Check out Red Hat Security Advisories to search for specific advisories and CVEs. 2.7.4. OpenShift Container Registry OpenShift Container Platform includes the OpenShift Container Registry , a private registry running as an integrated component of the platform that you can use to manage your container images. The OpenShift Container Registry provides role-based access controls that allow you to manage who can pull and push which container images. OpenShift Container Platform also supports integration with other private registries that you might already be using, such as Red Hat Quay. 
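For example, the pull and push access mentioned above is granted with ordinary RBAC role bindings. The following sketch grants a service account in one project permission to pull images from another project's image streams and grants a CI user permission to push; the project name shared-images , the project app-dev , and the user ci-bot are placeholders:

oc policy add-role-to-user system:image-puller system:serviceaccount:app-dev:default -n shared-images
oc policy add-role-to-user system:image-pusher ci-bot -n shared-images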
Additional resources Integrated OpenShift image registry 2.7.5. Storing containers using Red Hat Quay Red Hat Quay is an enterprise-quality container registry product from Red Hat. Development for Red Hat Quay is done through the upstream Project Quay . Red Hat Quay is available to deploy on-premises or through the hosted version of Red Hat Quay at Quay.io . Security-related features of Red Hat Quay include: Time machine : Allows images with older tags to expire after a set period of time or based on a user-selected expiration time. Repository mirroring : Lets you mirror other registries for security reasons, such as hosting a public repository on Red Hat Quay behind a company firewall, or for performance reasons, to keep registries closer to where they are used. Action log storage : Save Red Hat Quay logging output to Elasticsearch storage or Splunk to allow for later search and analysis. Clair : Scan images against a variety of Linux vulnerability databases, based on the origins of each container image. Internal authentication : Use the default local database to handle RBAC authentication to Red Hat Quay or choose from LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token authentication. External authorization (OAuth) : Allow authorization to Red Hat Quay from GitHub, GitHub Enterprise, or Google Authentication. Access settings : Generate tokens to allow access to Red Hat Quay from docker, rkt, anonymous access, user-created accounts, encrypted client passwords, or prefix username autocompletion. Ongoing integration of Red Hat Quay with OpenShift Container Platform continues, with several OpenShift Container Platform Operators of particular interest. The Quay Bridge Operator lets you replace the internal OpenShift image registry with Red Hat Quay. The Red Hat Quay Container Security Operator lets you check vulnerabilities of images running in OpenShift Container Platform that were pulled from Red Hat Quay registries. 2.8. Securing the build process In a container environment, the software build process is the stage in the life cycle where application code is integrated with the required runtime libraries. Managing this build process is key to securing the software stack. 2.8.1. Building once, deploying everywhere Using OpenShift Container Platform as the standard platform for container builds enables you to guarantee the security of the build environment. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It is also important to maintain the immutability of your containers. You should not patch running containers, but rebuild and redeploy them. As your software moves through the stages of building, testing, and production, it is important that the tools making up your software supply chain be trusted. The process and tools that could be incorporated into a trusted software supply chain for containerized software include the following: OpenShift Container Platform can be integrated with trusted code repositories (such as GitHub) and development platforms (such as Che) for creating and managing secure code. Unit testing could rely on Cucumber and JUnit . You could inspect your containers for vulnerabilities and compliance issues with Anchore or Twistlock, and use image scanning tools such as AtomicScan or Clair. Tools such as Sysdig could provide ongoing monitoring of your containerized applications. 2.8.2.
Managing builds You can use Source-to-Image (S2I) to combine source code and base images. Builder images make use of S2I to enable your development and operations teams to collaborate on a reproducible build environment. With Red Hat S2I images available as Universal Base Image (UBI) images, you can now freely redistribute your software with base images built from real RHEL RPM packages. Red Hat has removed subscription restrictions to allow this. When developers commit code with Git for an application using build images, OpenShift Container Platform can perform the following functions: Trigger, either by using webhooks on the code repository or other automated continuous integration (CI) process, to automatically assemble a new image from available artifacts, the S2I builder image, and the newly committed code. Automatically deploy the newly built image for testing. Promote the tested image to production where it can be automatically deployed using a CI process. You can use the integrated OpenShift Container Registry to manage access to final images. Both S2I and native build images are automatically pushed to your OpenShift Container Registry. In addition to the included Jenkins for CI, you can also integrate your own build and CI environment with OpenShift Container Platform using RESTful APIs, as well as use any API-compliant image registry. 2.8.3. Securing inputs during builds In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build. You can define input secrets for this purpose. For example, when building a Node.js application, you can set up your private mirror for Node.js modules. To download modules from that private mirror, you must supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image. Using this example scenario, you can add an input secret to a new BuildConfig object: Create the secret, if it does not exist: USD oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc This creates a new secret named secret-npmrc , which contains the base64 encoded content of the ~/.npmrc file. Add the secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc To include the secret in a new BuildConfig object, run the following command: USD oc new-build \ openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \ --build-secret secret-npmrc 2.8.4. Designing your build process You can design your container image management and build process to use container layers so that you can separate control. For example, an operations team manages base images, while architects manage middleware, runtimes, databases, and other solutions. Developers can then focus on application layers and focus on writing code. Because new vulnerabilities are identified daily, you need to proactively check container content over time. To do this, you should integrate automated security testing into your build or CI process. For example: SAST / DAST - Static and Dynamic security testing tools. Scanners for real-time checking against known vulnerabilities. 
Tools like these catalog the open source packages in your container, notify you of any known vulnerabilities, and update you when new vulnerabilities are discovered in previously scanned packages. Your CI process should include policies that flag builds with issues discovered by security scans so that your team can take appropriate action to address those issues. You should sign your custom built containers to ensure that nothing is tampered with between build and deployment. Using GitOps methodology, you can use the same CI/CD mechanisms to manage not only your application configurations, but also your OpenShift Container Platform infrastructure. 2.8.5. Building Knative serverless applications Relying on Kubernetes and Kourier, you can build, deploy, and manage serverless applications by using OpenShift Serverless in OpenShift Container Platform. As with other builds, you can use S2I images to build your containers, then serve them using Knative services. View Knative application builds through the Topology view of the OpenShift Container Platform web console. 2.8.6. Additional resources Understanding image builds Triggering and modifying builds Creating build inputs Input secrets and config maps OpenShift Serverless overview Viewing application composition using the Topology view 2.9. Deploying containers You can use a variety of techniques to make sure that the containers you deploy hold the latest production-quality content and that they have not been tampered with. These techniques include setting up build triggers to incorporate the latest code and using signatures to ensure that the container comes from a trusted source and has not been modified. 2.9.1. Controlling container deployments with triggers If something happens during the build process, or if a vulnerability is discovered after an image has been deployed, you can use tooling for automated, policy-based deployment to remediate. You can use triggers to rebuild and replace images, ensuring the immutable containers process, instead of patching running containers, which is not recommended. For example, you build an application using three container image layers: core, middleware, and applications. An issue is discovered in the core image and that image is rebuilt. After the build is complete, the image is pushed to your OpenShift Container Registry. OpenShift Container Platform detects that the image has changed and automatically rebuilds and deploys the application image, based on the defined triggers. This change incorporates the fixed libraries and ensures that the production code is identical to the most current image. You can use the oc set triggers command to set a deployment trigger. For example, to set a trigger for a deployment called deployment-example: USD oc set triggers deploy/deployment-example \ --from-image=example:latest \ --containers=web 2.9.2. Controlling what image sources can be deployed It is important that the intended images are actually being deployed, that the images including the contained content are from trusted sources, and they have not been altered. Cryptographic signing provides this assurance. OpenShift Container Platform enables cluster administrators to apply security policy that is broad or narrow, reflecting deployment environment and security requirements. 
Two parameters define this policy: one or more registries, with optional project namespace trust type, such as accept, reject, or require public key(s) You can use these policy parameters to allow, deny, or require a trust relationship for entire registries, parts of registries, or individual images. Using trusted public keys, you can ensure that the source is cryptographically verified. The policy rules apply to nodes. Policy may be applied uniformly across all nodes or targeted for different node workloads (for example, build, zone, or environment). Example image signature policy file { "default": [{"type": "reject"}], "transports": { "docker": { "access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "atomic": { "172.30.1.1:5000/openshift": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "172.30.1.1:5000/production": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/example.com/pubkey" } ], "172.30.1.1:5000": [{"type": "reject"}] } } } The policy can be saved onto a node as /etc/containers/policy.json . Saving this file to a node is best accomplished using a new MachineConfig object. This example enforces the following rules: Require images from the Red Hat Registry ( registry.access.redhat.com ) to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the openshift namespace to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the production namespace to be signed by the public key for example.com . Reject all other registries not specified by the global default definition. 2.9.3. Using signature transports A signature transport is a way to store and retrieve the binary signature blob. There are two types of signature transports. atomic : Managed by the OpenShift Container Platform API. docker : Served as a local file or by a web server. The OpenShift Container Platform API manages signatures that use the atomic transport type. You must store the images that use this signature type in your OpenShift Container Registry. Because the docker/distribution extensions API auto-discovers the image signature endpoint, no additional configuration is required. Signatures that use the docker transport type are served by local file or web server. These signatures are more flexible; you can serve images from any container image registry and use an independent server to deliver binary signatures. However, the docker transport type requires additional configuration. You must configure the nodes with the URI of the signature server by placing arbitrarily-named YAML files into a directory on the host system, /etc/containers/registries.d by default. The YAML configuration files contain a registry URI and a signature server URI, or sigstore : Example registries.d file docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore In this example, the Red Hat Registry, access.redhat.com , is the signature server that provides signatures for the docker transport type. Its URI is defined in the sigstore parameter. You might name this file /etc/containers/registries.d/redhat.com.yaml and use the Machine Config Operator to automatically place the file on each node in your cluster. No service restart is required since policy and registries.d files are dynamically loaded by the container runtime. 2.9.4. 
Creating secrets and config maps The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, and private source repository credentials. Secrets decouple sensitive content from pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. For example, to add a secret to your deployment configuration so that it can access a private image repository, do the following: Procedure Log in to the OpenShift Container Platform web console. Create a new project. Navigate to Resources Secrets and create a new secret. Set Secret Type to Image Secret and Authentication Type to Image Registry Credentials to enter credentials for accessing a private image repository. When creating a deployment configuration (for example, from the Add to Project Deploy Image page), set the Pull Secret to your new secret. Config maps are similar to secrets, but are designed to support working with strings that do not contain sensitive information. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. 2.9.5. Automating continuous deployment You can integrate your own continuous deployment (CD) tooling with OpenShift Container Platform. By leveraging CI/CD and OpenShift Container Platform, you can automate the process of rebuilding the application to incorporate the latest fixes, testing, and ensuring that it is deployed everywhere within the environment. Additional resources Input secrets and config maps 2.10. Securing the container platform OpenShift Container Platform and Kubernetes APIs are key to automating container management at scale. APIs are used to: Validate and configure the data for pods, services, and replication controllers. Perform project validation on incoming requests and invoke triggers on other major system components. Security-related features in OpenShift Container Platform that are based on Kubernetes include: Multitenancy, which combines Role-Based Access Controls and network policies to isolate containers at multiple levels. Admission plugins, which form boundaries between an API and those making requests to the API. OpenShift Container Platform uses Operators to automate and simplify the management of Kubernetes-level security features. 2.10.1. Isolating containers with multitenancy Multitenancy allows applications on an OpenShift Container Platform cluster that are owned by multiple users, and run across multiple hosts and namespaces, to remain isolated from each other and from outside attacks. You obtain multitenancy by applying role-based access control (RBAC) to Kubernetes namespaces. In Kubernetes, namespaces are areas where applications can run in ways that are separate from other applications. OpenShift Container Platform uses and extends namespaces by adding extra annotations, including MCS labeling in SELinux, and identifying these extended namespaces as projects . Within the scope of a project, users can maintain their own cluster resources, including service accounts, policies, constraints, and various other objects. RBAC objects are assigned to projects to authorize selected users to have access to those projects. That authorization takes the form of rules, roles, and bindings: Rules define what a user can create or access in a project. 
Roles are collections of rules that you can bind to selected users or groups. Bindings define the association between users or groups and roles. Local RBAC roles and bindings attach a user or group to a particular project. Cluster RBAC can attach cluster-wide roles and bindings to all projects in a cluster. There are default cluster roles that can be assigned to provide admin , basic-user , cluster-admin , and cluster-status access. 2.10.2. Protecting control plane with admission plugins While RBAC controls access rules between users and groups and available projects, admission plugins define access to the OpenShift Container Platform master API. Admission plugins form a chain of rules that consist of: Default admissions plugins: These implement a default set of policies and resources limits that are applied to components of the OpenShift Container Platform control plane. Mutating admission plugins: These plugins dynamically extend the admission chain. They call out to a webhook server and can both authenticate a request and modify the selected resource. Validating admission plugins: These validate requests for a selected resource and can both validate the request and ensure that the resource does not change again. API requests go through admissions plugins in a chain, with any failure along the way causing the request to be rejected. Each admission plugin is associated with particular resources and only responds to requests for those resources. 2.10.2.1. Security context constraints (SCCs) You can use security context constraints (SCCs) to define a set of conditions that a pod must run with to be accepted into the system. Some aspects that can be managed by SCCs include: Running of privileged containers Capabilities a container can request to be added Use of host directories as volumes SELinux context of the container Container user ID If you have the required permissions, you can adjust the default SCC policies to be more permissive, if required. 2.10.2.2. Granting roles to service accounts You can assign roles to service accounts, in the same way that users are assigned role-based access. There are three default service accounts created for each project. A service account: is limited in scope to a particular project derives its name from its project is automatically assigned an API token and credentials to access the OpenShift Container Registry Service accounts associated with platform components automatically have their keys rotated. 2.10.3. Authentication and authorization 2.10.3.1. Controlling access using OAuth You can use API access control via authentication and authorization for securing your container platform. The OpenShift Container Platform master includes a built-in OAuth server. Users can obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to authenticate using an identity provider , such as LDAP, GitHub, or Google. The identity provider is used by default for new OpenShift Container Platform deployments, but you can configure this at initial installation time or postinstallation. 2.10.3.2. API access control and management Applications can have multiple, independent API services which have different endpoints that require management. OpenShift Container Platform includes a containerized version of the 3scale API gateway so that you can manage your APIs and control access. 
3scale gives you a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access: standard API keys, application ID and key pair, and OAuth 2.0. You can restrict access to specific endpoints, methods, and services and apply access policy for groups of users. Application plans allow you to set rate limits for API usage and control traffic flow for groups of developers. For a tutorial on using APIcast v2, the containerized 3scale API Gateway, see Running APIcast on Red Hat OpenShift in the 3scale documentation. 2.10.3.3. Red Hat Single Sign-On The Red Hat Single Sign-On server enables you to secure your applications by providing web single sign-on capabilities based on standards, including SAML 2.0, OpenID Connect, and OAuth 2.0. The server can act as a SAML or OpenID Connect-based identity provider (IdP), mediating with your enterprise user directory or third-party identity provider for identity information and your applications using standards-based tokens. You can integrate Red Hat Single Sign-On with LDAP-based directory services including Microsoft Active Directory and Red Hat Enterprise Linux Identity Management. 2.10.3.4. Secure self-service web console OpenShift Container Platform provides a self-service web console to ensure that teams do not access other environments without authorization. OpenShift Container Platform ensures a secure multitenant master by providing the following: Access to the master uses Transport Layer Security (TLS) Access to the API Server uses X.509 certificates or OAuth access tokens Project quota limits the damage that a rogue token could do The etcd service is not exposed directly to the cluster 2.10.4. Managing certificates for the platform OpenShift Container Platform has multiple components within its framework that use REST-based HTTPS communication leveraging encryption via TLS certificates. OpenShift Container Platform's installer configures these certificates during installation. There are some primary components that generate this traffic: masters (API server and controllers) etcd nodes registry router 2.10.4.1. Configuring custom certificates You can configure custom serving certificates for the public hostnames of the API server and web console during initial installation or when redeploying certificates. You can also use a custom CA. Additional resources Introduction to OpenShift Container Platform Using RBAC to define and apply permissions About admission plugins Managing security context constraints SCC reference commands Examples of granting roles to service accounts Configuring the internal OAuth server Understanding identity provider configuration Certificate types and descriptions Proxy certificates 2.11. Securing networks Network security can be managed at several levels. At the pod level, network namespaces can prevent containers from seeing other pods or the host system by restricting network access. Network policies give you control over allowing and rejecting connections. You can manage ingress and egress traffic to and from your containerized applications. 2.11.1. Using network namespaces OpenShift Container Platform uses software-defined networking (SDN) to provide a unified cluster network that enables communication between containers across the cluster. Network policy mode, by default, makes all pods in a project accessible from other pods and network endpoints. 
To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Using multitenant mode, you can provide project-level isolation for pods and services. 2.11.2. Isolating pods with network policies Using network policies , you can isolate pods from each other in the same project. Network policies can deny all network access to a pod, only allow connections for the Ingress Controller, reject connections from pods in other projects, or set similar rules for how networks behave. Additional resources About network policy 2.11.3. Using multiple pod networks Each running container has only one network interface by default. The Multus CNI plugin lets you create multiple CNI networks, and then attach any of those networks to your pods. In that way, you can do things like separate private data onto a more restricted network and have multiple network interfaces on each node. Additional resources Using multiple networks 2.11.4. Isolating applications OpenShift Container Platform enables you to segment network traffic on a single cluster to make multitenant clusters that isolate users, teams, applications, and environments from non-global resources. Additional resources Configuring network isolation using OpenShiftSDN 2.11.5. Securing ingress traffic There are many security implications related to how you configure access to your Kubernetes services from outside of your OpenShift Container Platform cluster. Besides exposing HTTP and HTTPS routes, ingress routing allows you to set up NodePort or LoadBalancer ingress types. NodePort exposes an application's service API object from each cluster worker. LoadBalancer lets you assign an external load balancer to an associated service API object in your OpenShift Container Platform cluster. Additional resources Configuring ingress cluster traffic 2.11.6. Securing egress traffic OpenShift Container Platform provides the ability to control egress traffic using either a router or firewall method. For example, you can use IP whitelisting to control database access. A cluster administrator can assign one or more egress IP addresses to a project in an OpenShift Container Platform SDN network provider. Likewise, a cluster administrator can prevent egress traffic from going outside of an OpenShift Container Platform cluster using an egress firewall. By assigning a fixed egress IP address, you can have all outgoing traffic assigned to that IP address for a particular project. With the egress firewall, you can prevent a pod from connecting to an external network, prevent a pod from connecting to an internal network, or limit a pod's access to specific internal subnets. Additional resources Configuring an egress firewall to control access to external IP addresses Configuring egress IPs for a project 2.12. Securing attached storage OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. In particular, OpenShift Container Platform can use storage types that support the Container Storage Interface. 2.12.1. Persistent volume plugins Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Using the Container Storage Interface (CSI), OpenShift Container Platform can incorporate storage from any storage back end that supports the CSI interface. 
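Regardless of the back end, workloads request this storage through persistent volume claims. The following is a minimal sketch of such a claim; the storage class name example-csi-sc is a placeholder for whatever CSI-backed storage class exists in your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: my-app
spec:
  accessModes:
    - ReadWriteOnce   # single-node read/write; other access modes are described below
  resources:
    requests:
      storage: 10Gi
  storageClassName: example-csi-sc

Because the claim only names a storage class, the same application definition can move between storage back ends without change, while access to the provisioned volume is still constrained by the access modes and by the SELinux and supplemental group handling described in the following sections.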
OpenShift Container Platform provides plugins for multiple types of storage, including: Red Hat OpenShift Data Foundation * AWS Elastic Block Stores (EBS) * AWS Elastic File System (EFS) * Azure Disk * Azure File * OpenStack Cinder * GCE Persistent Disks * VMware vSphere * Network File System (NFS) FlexVolume Fibre Channel iSCSI Plugins for those storage types with dynamic provisioning are marked with an asterisk (*). Data in transit is encrypted via HTTPS for all OpenShift Container Platform components communicating with each other. You can mount a persistent volume (PV) on a host in any way supported by your storage type. Different types of storage have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV has its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce , ReadOnlyMany , and ReadWriteMany . 2.12.2. Shared storage For shared storage providers like NFS, the PV registers its group ID (GID) as an annotation on the PV resource. Then, when the PV is claimed by the pod, the annotated GID is added to the supplemental groups of the pod, giving that pod access to the contents of the shared storage. 2.12.3. Block storage For block storage providers like AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI, OpenShift Container Platform uses SELinux capabilities to secure the root of the mounted volume for non-privileged pods, making the mounted volume owned by and only visible to the container with which it is associated. Additional resources Understanding persistent storage Configuring CSI volumes Dynamic provisioning Persistent storage using NFS Persistent storage using AWS Elastic Block Store Persistent storage using GCE Persistent Disk 2.13. Monitoring cluster events and logs The ability to monitor and audit an OpenShift Container Platform cluster is an important part of safeguarding the cluster and its users against inappropriate usage. There are two main sources of cluster-level information that are useful for this purpose: events and logging. 2.13.1. Watching cluster events Cluster administrators are encouraged to familiarize themselves with the Event resource type and review the list of system events to determine which events are of interest. Events are associated with a namespace, either the namespace of the resource they are related to or, for cluster events, the default namespace. The default namespace holds relevant events for monitoring or auditing a cluster, such as node events and resource events related to infrastructure components. The master API and oc command do not provide parameters to scope a listing of events to only those related to nodes. A simple approach would be to use grep : USD oc get event -n default | grep Node Example output 1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure ... A more flexible approach is to output the events in a form that other tools can process. 
For example, the following command uses the jq tool against JSON output to extract only NodeHasDiskPressure events: USD oc get events -n default -o json \ | jq '.items[] | select(.involvedObject.kind == "Node" and .reason == "NodeHasDiskPressure")' Example output { "apiVersion": "v1", "count": 3, "involvedObject": { "kind": "Node", "name": "origin-node-1.example.local", "uid": "origin-node-1.example.local" }, "kind": "Event", "reason": "NodeHasDiskPressure", ... } Events related to resource creation, modification, or deletion can also be good candidates for detecting misuse of the cluster. The following query, for example, can be used to look for excessive pulling of images: USD oc get events --all-namespaces -o json \ | jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Pulling")] | length' Example output 4 Note When a namespace is deleted, its events are deleted as well. Events can also expire and are deleted to prevent filling up etcd storage. Events are not stored as a permanent record and frequent polling is necessary to capture statistics over time. 2.13.2. Logging Using the oc logs command, you can view container logs, build configs and deployments in real time. Different users have different levels of access to logs: Users who have access to a project are able to see the logs for that project by default. Users with admin roles can access all container logs. To save your logs for further audit and analysis, you can enable the cluster-logging add-on feature to collect, manage, and view system, container, and audit logs. You can deploy, manage, and upgrade OpenShift Logging through the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. 2.13.3. Audit logs With audit logs , you can follow a sequence of activities associated with how a user, administrator, or other OpenShift Container Platform component is behaving. API audit logging is done on each server. Additional resources List of system events Understanding OpenShift Logging Viewing audit logs
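As a practical follow-on to the audit log discussion, the logs can also be read directly from the control plane nodes; the following commands are illustrative (the node name, role, and log path depend on your cluster and may differ): oc adm node-logs --role=master --path=openshift-apiserver/ lists the available API server audit log files, and oc adm node-logs <node_name> --path=openshift-apiserver/audit.log | jq 'select(.user.username == "myusername")' filters the entries recorded for a single user.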
[ "variant: openshift version: 4.14.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml", "oc apply -f 51-worker-rh-registry-trust.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1", "oc debug node/<node_name>", "sh-4.2# chroot /host", "docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 
51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>", "oc describe machineconfigpool/worker", "Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3", "oc debug node/<node> -- chroot /host cat /etc/containers/policy.json", "Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml", "Starting 
pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml", "Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2", "oc adm release info <release_version> \\ 1", "--- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 ---", "curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt", "curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \\ 1", "skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \\ 1", "skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key", "Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55", "quality.images.openshift.io/<qualityType>.<providerId>: {}", "quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}", "{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }", "{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }", "oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'", "annotations: images.openshift.io/deny-execution: true", "curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'", "{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 
'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }", "oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc", "source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc", "oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc", "oc set triggers deploy/deployment-example --from-image=example:latest --containers=web", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }", "docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc get event -n default | grep Node", "1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure", "oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'", "{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }", "oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'", "4" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_and_compliance/container-security-1
4.3. Red Hat Virtualization 4.3 Batch Update 2 (ovirt-4.3.5)
4.3. Red Hat Virtualization 4.3 Batch Update 2 (ovirt-4.3.5) 4.3.1. Bug Fix The items listed in this section are bugs that were addressed in this release: BZ# 1630824 In this release, engine-backup correctly backs up and restores configuration and data files of Open vSwitch and ovirt-provider-ovn. BZ# 1667489 Previously, although SSO tokens are supposed to expire after a period of user inactivity that is defined in engine-config, the VM portal sent a request every minute. Consequently, the SSO token never expired on the VM portal, and the VM portal continued running even when it was unused in the background. This bug is now fixed. When the user does not actively use the VM portal for a period of time defined in engine-config, the VM portal presents a prompt. The user is automatically logged out after 30 seconds unless they choose to stay logged in. BZ# 1674352 When a user edited the initial run data of a Virtual Machine created by a pool, it caused ambiguous results by allowing different Virtual Machines within the same pool to have different values, even though they were all created together with the pool. In this release, the user cannot modify an individual Virtual Machine's initial run data when the Virtual Machine is part of a pool. BZ# 1674386 Previously, the Affinity Rules Enforcer tried to migrate only one Virtual Machine, but if the migration failed, it did not attempt another migration. In this release, the Affinity Rules Enforcer tries to migrate multiple Virtual Machines until a migration succeeds. BZ# 1699684 When updating an existing Cluster, all of the Virtual Machines residing on the cluster were also updated. However, when each Virtual Machine was initialized for updating, the initial run data was not loaded, and therefore appeared to be empty. In this release, the initial run data is initialized for each Virtual Machine before the VM Update functionality is called as part of the cluster update. This results in the initial run data being preserved and not deleted. BZ# 1700461 Previously, migration failed with an XDG_RUNTIME_DIR error in the virt-v2v log. The current release fixes this error by dropping XDG_RUNTIME_DIR from the environment. BZ# 1712667 Previously, hosted-engine-setup automatically configured only the management network on the host used at restore time. If the backup file contained references to additional required logical networks, the missing networks prevented the host from booting. Now, hosted-engine-setup detects the missing networks and displays a hint to enable the user to connect to the engine manually, so the host can boot. BZ# 1716951 Previously, when lease data was moved from the VM Static to the VM Dynamic DB table, there was no consideration that upgrading from 4.1 to later versions would leave the lease data empty when a lease storage domain ID had been specified. This caused validation to fail when the VM launched, so that the VM no longer ran without the user resetting the lease storage domain ID. Consequently, HA VMs with lease storage domain IDs failed to execute. This bug is now fixed, such that validation no longer takes place when the VM runs, and the lease data is automatically regenerated when the lease storage domain ID is set. After the lease data is regenerated, the VM has the information it needs to run. Now, after upgrading from 4.1 to later versions, HA VMs with lease storage domain IDs execute normally. BZ# 1718829 Previously, several important packages were removed when disabling RHV conversion hosts. This bug is now fixed. 
BZ# 1721362 Previously, when the host running the engine Virtual Machine was set into Maintenance Mode from the engine, the ovirt-ha-agent migrated the engine Virtual Machine as an indirect effect of Maintenance Mode. In this release, the engine has full control of the migration process. BZ# 1722173 Previously, iperf3 RPMs were missing from the optional Red Hat Virtualization Host (RHVH) repository, rhel-7-server-rhvh-4-rpms. The current release adds the iperf3 package to the RHVH image instead of the optional repository. BZ# 1722933 Previously, the ovirt-iso-uploader tool did not parse ssh login credentials correctly, and consequently, you could not use it to upload ISO images. This bug has been fixed, so that now you can upload ISO images. BZ# 1723322 In this release, the directory /var/lib/ovirt-hosted-engine-setup/cockpit is created with read permissions only for user 'root'. Non-'root' users cannot view this directory. BZ# 1723873 In the Manager in RHV 4.1 and earlier, DiskType is an int value, while in RHV 4.2 and later, DiskType is a string value. Consequently, using RHV 4.3 hosts with Manager in RHV 4.1 causes the VDSM error "Invalid parameter: 'DiskType=2'". In this release, DiskType is once more an int value, so RHV 4.3 hosts can now work with Manager in RHV 4.1. BZ# 1725660 The Red Hat Virtualization REST API Guide did not include the all_content attribute of the HostNicService, and it was not possible to use this parameter with ovirt-engine-sdk-python. The REST API Guide has been updated to include the all_content parameter as part of HostNicService. BZ# 1725954 Previously, the libvirt-admin package was missing from the optional Red Hat Virtualization Host (RHVH) repository, rhel-7-server-rhvh-4*. The current release adds libvirt-admin to the RHVH image instead of the optional repository. 4.3.2. Enhancements This release of Red Hat Virtualization features the following enhancements: BZ# 1651747 In this release, the Affinity Enforcement process now includes Soft Virtual Machine Affinity. BZ# 1688264 In this release, the minimum supported version number of virt-viewer for Red Hat Enterprise Linux 8 has been added to the list of supported versions in the console.vv file that is displayed when a Virtual Machine console is triggered. BZ# 1713213 In this release, a new notification has been added when loading a large number of LUNs: "Loading... A large number of LUNs may slow down the operation." BZ# 1719735 In this release, a bootable Storage Domain is set as the default lease Storage Domain when HA is selected for a new Virtual Machine. 4.3.3. Rebase: Bug Fixes and Enhancements The items listed in this section are bugs or enhancements that were originally resolved or introduced in the community version and included in this release. BZ# 1717763 Rebase package(s) to version: openvswitch to 2.11, ovn to 2.11 BZ# 1728283 The current release rebases ovirt-host for oVirt 4.3.4. Release Notes: https://ovirt.org/release/4.3.4/ 4.3.4. Rebase: Bug Fixes Only The items listed in this section are bugs that were originally resolved in the community version and included in this release. BZ# 1710696 In this release, when updating a Virtual Machine using a REST API, not specifying the console value now means that the console state should not be changed. As a result, the console keeps its state. BZ# 1719726 Previously, engine-setup included one or more incorrect URLs to documentation. These URLs have been fixed. 4.3.5. 
Release Notes This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment. BZ# 1723345 This is the initial release of the Red Hat Virtualization Python SDK for Red Hat Enterprise Linux 8. BZ# 1723349 This is the initial release of the Red Hat Virtualization Ruby SDK for RHEL 8. 4.3.6. Known Issues These known issues exist in Red Hat Virtualization at this time: BZ# 1716608 While installing metrics store according to the documented installation instructions, the playbook cannot locate a filter and therefore fails when it tries to upload the template image. If you run the same role (image-template) outside the Metrics installation playbook with a simple test playbook, the filter is found, and the task completes successfully. There is no workaround at this time.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/release_notes/red_hat_virtualization_4_3_batch_update_2_ovirt_4_3_5
10.5.31. CacheNegotiatedDocs
10.5.31. CacheNegotiatedDocs By default, the Web server asks proxy servers not to cache any documents which were negotiated on the basis of content (that is, they may change over time or because of the input from the requester). If CacheNegotiatedDocs is set to on , this function is disabled and proxy servers are allowed to cache such documents.
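For reference, the directive takes a simple on or off value in the httpd.conf file; a minimal sketch (the surrounding server configuration is omitted) that permits proxies to cache negotiated documents is: CacheNegotiatedDocs on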
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-cachenegotiateddocs
Chapter 1. Configuring resource quota or requests
Chapter 1. Configuring resource quota or requests With the Argo CD custom resource (CR), you can create, update, and delete resource requests and limits for Argo CD workloads. 1.1. Configuring workloads with resource requests and limits You can create Argo CD custom resource workloads with resource requests and limits. This is required when you want to deploy the Argo CD instance in a namespace that is configured with resource quotas. The following Argo CD instance deploys the Argo CD workloads such as Application Controller , ApplicationSet Controller , Dex , Redis , Repo Server , and Server with resource requests and limits. You can also create the other workloads with resource requirements in the same manner. apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example spec: server: resources: limits: cpu: 500m memory: 256Mi requests: cpu: 125m memory: 128Mi route: enabled: true applicationSet: resources: limits: cpu: '2' memory: 1Gi requests: cpu: 250m memory: 512Mi repo: resources: limits: cpu: '1' memory: 512Mi requests: cpu: 250m memory: 256Mi dex: resources: limits: cpu: 500m memory: 256Mi requests: cpu: 250m memory: 128Mi redis: resources: limits: cpu: 500m memory: 256Mi requests: cpu: 250m memory: 128Mi controller: resources: limits: cpu: '2' memory: 2Gi requests: cpu: 250m memory: 1Gi 1.2. Patching an Argo CD instance to update the resource requirements You can update the resource requirements for all or any of the workloads after installation. Procedure Update the Application Controller resource requests of an Argo CD instance in the Argo CD namespace. oc -n argocd patch argocd example --type='json' -p='[{"op": "replace", "path": "/spec/controller/resources/requests/cpu", "value":"1"}]' oc -n argocd patch argocd example --type='json' -p='[{"op": "replace", "path": "/spec/controller/resources/requests/memory", "value":"512Mi"}]' 1.3. Removing resource requests You can also remove resource requirements for all or any of your workloads after installation. Procedure Remove the Application Controller resource requests of an Argo CD instance in the Argo CD namespace. oc -n argocd patch argocd example --type='json' -p='[{"op": "remove", "path": "/spec/controller/resources/requests/cpu"}]' oc -n argocd patch argocd example --type='json' -p='[{"op": "remove", "path": "/spec/controller/resources/requests/memory"}]'
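As an optional verification step after patching or removing resource requirements, you can read the current values back from the Argo CD custom resource. The following read-only query is shown as an illustration and prints the Application Controller resource block: oc -n argocd get argocd example -o jsonpath='{.spec.controller.resources}'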
[ "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example spec: server: resources: limits: cpu: 500m memory: 256Mi requests: cpu: 125m memory: 128Mi route: enabled: true applicationSet: resources: limits: cpu: '2' memory: 1Gi requests: cpu: 250m memory: 512Mi repo: resources: limits: cpu: '1' memory: 512Mi requests: cpu: 250m memory: 256Mi dex: resources: limits: cpu: 500m memory: 256Mi requests: cpu: 250m memory: 128Mi redis: resources: limits: cpu: 500m memory: 256Mi requests: cpu: 250m memory: 128Mi controller: resources: limits: cpu: '2' memory: 2Gi requests: cpu: 250m memory: 1Gi", "-n argocd patch argocd example --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/controller/resources/requests/cpu\", \"value\":\"1\"}]' -n argocd patch argocd example --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/controller/resources/requests/memory\", \"value\":\"512Mi\"}]'", "-n argocd patch argocd example --type='json' -p='[{\"op\": \"remove\", \"path\": \"/spec/controller/resources/requests/cpu\"}]' -n argocd argocd patch argocd example --type='json' -p='[{\"op\": \"remove\", \"path\": \"/spec/controller/resources/requests/memory\"}]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/managing_resource_use/configuring-resource-quota
11.5. Mail User Agents
11.5. Mail User Agents There are scores of mail programs available under Red Hat Enterprise Linux. There are full-featured, graphical email client programs, such as Mozilla Mail or Ximian Evolution , as well as text-based email programs such as mutt . The remainder of this section focuses on securing communication between the client and server. 11.5.1. Securing Communication Popular MUAs included with Red Hat Enterprise Linux, such as Mozilla Mail , Ximian Evolution , and mutt offer SSL-encrypted email sessions. Like any other service that flows over a network unencrypted, important email information, such as usernames, passwords, and entire messages, may be intercepted and viewed by users on the network. Additionally, since the standard POP and IMAP protocols pass authentication information unencrypted, it is possible for an attacker to gain access to user accounts by collecting usernames and passwords as they are passed over the network. 11.5.1.1. Secure Email Clients Most Linux MUAs designed to check email on remote servers support SSL encryption. To use SSL when retrieving email, it must be enabled on both the email client and server. SSL is easy to enable on the client-side, often done with the click of a button in the MUA's configuration window or via an option in the MUA's configuration file. Secure IMAP and POP have known port numbers (993 and 995, respectively) that the MUA uses to authenticate and download messages. 11.5.1.2. Securing Email Client Communications Offering SSL encryption to IMAP and POP users on the email server is a simple matter. First, create an SSL certificate. This can be done two ways: by applying to a Certificate Authority ( CA ) for an SSL certificate or by creating a self-signed certificate. Warning Self-signed certificates should be used for testing purposes only. Any server used in a production environment should use an SSL certificate granted by a CA. To create a self-signed SSL certificate for IMAP, change to the /usr/share/ssl/certs/ directory and type the following commands as root: Answer all of the questions to complete the process. To create a self-signed SSL certificate for POP, change to the /usr/share/ssl/certs/ directory, and type the following commands as root: Again, answer all of the questions to complete the process. Important Please be sure to remove the default imapd.pem and ipop3d.pem files before issuing each make command. Once finished, execute the /sbin/service xinetd restart command to restart the xinetd daemon which controls imapd and ipop3d . Alternatively, the stunnel command can be used as an SSL encryption wrapper around the standard, non-secure daemons, imapd or pop3d . The stunnel program uses external OpenSSL libraries included with Red Hat Enterprise Linux to provide strong cryptography and protect the connections. It is best to apply to a CA to obtain an SSL certificate, but it is also possible to create a self-signed certificate. To create a self-signed SSL certificate, change to the /usr/share/ssl/certs/ directory, and type the following command: Again, answer all of the questions to complete the process. Once the certificate is generated, it is possible to use the stunnel command to start the imapd mail daemon using the following command: Once this command is issued, it is possible to open an IMAP email client and connect to the email server using SSL encryption. 
To start the pop3d using the stunnel command, type the following command: For more information about how to use stunnel , read the stunnel man page or refer to the documents in the /usr/share/doc/stunnel- <version-number> / directory, where <version-number> is the version number for stunnel .
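One way to verify that the stunnel-wrapped daemons accept encrypted connections is to test them with the OpenSSL command-line client; for example, substituting your mail server's hostname for localhost: openssl s_client -connect localhost:993 for secure IMAP and openssl s_client -connect localhost:995 for secure POP. A successful handshake displays the certificate generated above and leaves the session open so you can issue IMAP or POP commands manually.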
[ "rm -f imapd.pem make imapd.pem", "rm -f ipop3d.pem make ipop3d.pem", "make stunnel.pem", "/usr/sbin/stunnel -d 993 -l /usr/sbin/imapd imapd", "/usr/sbin/stunnel -d 995 -l /usr/sbin/pop3d pop3d" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-email-mua
Using cost models
Using cost models Cost Management Service 1-latest Configuring cost models to reflect your cloud costs Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html-single/using_cost_models/index
Chapter 8. Configuring instance scheduling and placement
Chapter 8. Configuring instance scheduling and placement The Compute scheduler service determines on which Compute node or host aggregate to place an instance. When the Compute (nova) service receives a request to launch or move an instance, it uses the specifications provided in the request, the flavor, and the image to find a suitable host. For example, a flavor can specify the traits an instance requires a host to have, such as the type of storage disk, or the Intel CPU instruction set extension. The Compute scheduler service uses the configuration of the following components, in the following order, to determine on which Compute node to launch or move an instance: Placement service prefilters : The Compute scheduler service uses the Placement service to filter the set of candidate Compute nodes based on specific attributes. For example, the Placement service automatically excludes disabled Compute nodes. Filters : Used by the Compute scheduler service to determine the initial set of Compute nodes on which to launch an instance. Weights : The Compute scheduler service prioritizes the filtered Compute nodes using a weighting system. The highest weight has the highest priority. In the following diagram, host 1 and 3 are eligible after filtering. Host 1 has the highest weight and therefore has the highest priority for scheduling. 8.1. Prefiltering using the Placement service The Compute service (nova) interacts with the Placement service when it creates and manages instances. The Placement service tracks the inventory and usage of resource providers, such as a Compute node, a shared storage pool, or an IP allocation pool, and their available quantitative resources, such as the available vCPUs. Any service that needs to manage the selection and consumption of resources can use the Placement service. The Placement service also tracks the mapping of available qualitative resources to resource providers, such as the type of storage disk trait a resource provider has. The Placement service applies prefilters to the set of candidate Compute nodes based on Placement service resource provider inventories and traits. You can create prefilters based on the following criteria: Supported image types Traits Projects or tenants Availability zone 8.1.1. Filtering by requested image type support You can exclude Compute nodes that do not support the disk format of the image used to launch an instance. This is useful when your environment uses Red Hat Ceph Storage as an ephemeral backend, which does not support QCOW2 images. Enabling this feature ensures that the scheduler does not send requests to launch instances using a QCOW2 image to Compute nodes backed by Red Hat Ceph Storage. Procedure Open your Compute environment file. To exclude Compute nodes that do not support the disk format of the image used to launch an instance, set the NovaSchedulerQueryImageType parameter to True in the Compute environment file. Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 8.1.2. Filtering by resource provider traits Each resource provider has a set of traits. Traits are the qualitative aspects of a resource provider, for example, the type of storage disk, or the Intel CPU instruction set extension. The Compute node reports its capabilities to the Placement service as traits. An instance can specify which of these traits it requires, or which traits the resource provider must not have. 
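For orientation before the procedures that follow, here is a rough sketch of the Placement trait commands they rely on; the trait name and host UUID are placeholders, and your deployment may require sourcing the overcloud credentials first: openstack --os-placement-api-version 1.6 trait list shows the standard os-traits, openstack --os-placement-api-version 1.6 trait create CUSTOM_MY_TRAIT defines a custom trait, and openstack --os-placement-api-version 1.6 resource provider trait list <host_uuid> shows the traits a Compute node currently reports.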
The Compute scheduler can use these traits to identify a suitable Compute node or host aggregate to host an instance. To enable your cloud users to create instances on hosts that have particular traits, you can define a flavor that requires or forbids a particular trait, and you can create an image that requires or forbids a particular trait. For a list of the available traits, see the os-traits library . You can also create custom traits, as required. 8.1.2.1. Creating an image that requires or forbids a resource provider trait You can create an instance image that your cloud users can use to launch instances on hosts that have particular traits. Procedure Create a new image: Identify the trait you require a host or host aggregate to have. You can select an existing trait, or create a new trait: To use an existing trait, list the existing traits to retrieve the trait name: To create a new trait, enter the following command: Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Collect the existing resource provider traits of each host: Check the existing resource provider traits for the traits you require a host or host aggregate to have: If the traits you require are not already added to the resource provider, then add the existing traits and your required traits to the resource providers for each host: Replace <TRAIT_NAME> with the name of the trait that you want to add to the resource provider. You can use the --trait option more than once to add additional traits, as required. Note This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed. To schedule instances on a host or host aggregate that has a required trait, add the trait to the image extra specs. For example, to schedule instances on a host or host aggregate that supports AVX-512, add the following trait to the image extra specs: To filter out hosts or host aggregates that have a forbidden trait, add the trait to the image extra specs. For example, to prevent instances from being scheduled on a host or host aggregate that supports multi-attach volumes, add the following trait to the image extra specs: 8.1.2.2. Creating a flavor that requires or forbids a resource provider trait You can create flavors that your cloud users can use to launch instances on hosts that have particular traits. Procedure Create a flavor: Identify the trait you require a host or host aggregate to have. You can select an existing trait, or create a new trait: To use an existing trait, list the existing traits to retrieve the trait name: To create a new trait, enter the following command: Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Collect the existing resource provider traits of each host: Check the existing resource provider traits for the traits you require a host or host aggregate to have: If the traits you require are not already added to the resource provider, then add the existing traits and your required traits to the resource providers for each host: Replace <TRAIT_NAME> with the name of the trait that you want to add to the resource provider. You can use the --trait option more than once to add additional traits, as required. 
Note This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed. To schedule instances on a host or host aggregate that has a required trait, add the trait to the flavor extra specs. For example, to schedule instances on a host or host aggregate that supports AVX-512, add the following trait to the flavor extra specs: To filter out hosts or host aggregates that have a forbidden trait, add the trait to the flavor extra specs. For example, to prevent instances from being scheduled on a host or host aggregate that supports multi-attach volumes, add the following trait to the flavor extra specs: 8.1.3. Filtering by isolating host aggregates You can restrict scheduling on a host aggregate to only those instances whose flavor and image traits match the metadata of the host aggregate. The combination of flavor and image metadata must require all the host aggregate traits to be eligible for scheduling on Compute nodes in that host aggregate. Procedure Open your Compute environment file. To isolate host aggregates to host only instances whose flavor and image traits match the aggregate metadata, set the NovaSchedulerEnableIsolatedAggregateFiltering parameter to True in the Compute environment file. Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Identify the traits you want to isolate the host aggregate for. You can select an existing trait, or create a new trait: To use an existing trait, list the existing traits to retrieve the trait name: To create a new trait, enter the following command: Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Collect the existing resource provider traits of each Compute node: Check the existing resource provider traits for the traits you want to isolate the host aggregate for: If the traits you require are not already added to the resource provider, then add the existing traits and your required traits to the resource providers for each Compute node in the host aggregate: Replace <TRAIT_NAME> with the name of the trait that you want to add to the resource provider. You can use the --trait option more than once to add additional traits, as required. Note This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed. Repeat steps 6 - 8 for each Compute node in the host aggregate. Add the metadata property for the trait to the host aggregate: Add the trait to a flavor or an image: 8.1.4. Filtering by availability zone using the Placement service You can use the Placement service to honor availability zone requests. To use the Placement service to filter by availability zone, placement aggregates must exist that match the membership and UUID of the availability zone host aggregates. Procedure Open your Compute environment file. To use the Placement service to filter by availability zone, set the NovaSchedulerQueryPlacementForAvailabilityZone parameter to True in the Compute environment file. Remove the AvailabilityZoneFilter filter from the NovaSchedulerEnabledFilters parameter. Save the updates to your Compute environment file. 
Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Additional resources For more information on creating a host aggregate to use as an availability zone, see Creating an availability zone . 8.2. Configuring filters and weights for the Compute scheduler service You need to configure the filters and weights for the Compute scheduler service to determine the initial set of Compute nodes on which to launch an instance. Procedure Open your Compute environment file. Add the filters you want the scheduler to use to the NovaSchedulerEnabledFilters parameter, for example: Specify which attribute to use to calculate the weight of each Compute node, for example: For more information on the available attributes, see Compute scheduler weights . Optional: Configure the multiplier to apply to each weigher. For example, to specify that the available RAM of a Compute node has a higher weight than the other default weighers, and that the Compute scheduler prefers Compute nodes with more available RAM over those nodes with less available RAM, use the following configuration: Tip You can also set multipliers to a negative value. In the above example, to prefer Compute nodes with less available RAM over those nodes with more available RAM, set ram_weight_multiplier to -2.0 . Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Additional resources For a list of the available Compute scheduler service filters, see Compute scheduler filters . For a list of the available weight configuration options, see Compute scheduler weights . 8.3. Compute scheduler filters You configure the NovaSchedulerEnabledFilters parameter in your Compute environment file to specify the filters the Compute scheduler must apply when selecting an appropriate Compute node to host an instance. The default configuration applies the following filters: AvailabilityZoneFilter : The Compute node must be in the requested availability zone. ComputeFilter : The Compute node can service the request. ComputeCapabilitiesFilter : The Compute node satisfies the flavor extra specs. ImagePropertiesFilter : The Compute node satisfies the requested image properties. ServerGroupAntiAffinityFilter : The Compute node is not already hosting an instance in a specified group. ServerGroupAffinityFilter : The Compute node is already hosting instances in a specified group. You can add and remove filters. The following table describes all the available filters. Table 8.1. Compute scheduler filters Filter Description AggregateImagePropertiesIsolation Use this filter to match the image metadata of an instance with host aggregate metadata. If any of the host aggregate metadata matches the metadata of the image, then the Compute nodes that belong to that host aggregate are candidates for launching instances from that image. The scheduler only recognises valid image metadata properties. For details on valid image metadata properties, see Image configuration parameters . AggregateInstanceExtraSpecsFilter Use this filter to match namespaced properties defined in the flavor extra specs of an instance with host aggregate metadata. You must scope your flavor extra_specs keys by prefixing them with the aggregate_instance_extra_specs: namespace. 
If any of the host aggregate metadata matches the metadata of the flavor extra spec, then the Compute nodes that belong to that host aggregate are candidates for launching instances from that image. AggregateIoOpsFilter Use this filter to filter hosts by I/O operations with a per-aggregate filter_scheduler/max_io_ops_per_host value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the scheduler uses the minimum value. AggregateMultiTenancyIsolation Use this filter to limit the availability of Compute nodes in project-isolated host aggregates to a specified set of projects. Only projects specified using the filter_tenant_id metadata key can launch instances on Compute nodes in the host aggregate. For more information, see Creating a project-isolated host aggregate . Note The project can still place instances on other hosts. To restrict this, use the NovaSchedulerPlacementAggregateRequiredForTenants parameter. AggregateNumInstancesFilter Use this filter to limit the number of instances each Compute node in an aggregate can host. You can configure the maximum number of instances per-aggregate by using the filter_scheduler/max_instances_per_host parameter. If the per-aggregate value is not found, the value falls back to the global setting. If the Compute node is in more than one aggregate, the scheduler uses the lowest max_instances_per_host value. AggregateTypeAffinityFilter Use this filter to pass hosts if no flavor metadata key is set, or the flavor aggregate metadata value contains the name of the requested flavor. The value of the flavor metadata entry is a string that may contain either a single flavor name or a comma-separated list of flavor names, such as m1.nano or m1.nano,m1.small . AllHostsFilter Use this filter to consider all available Compute nodes for instance scheduling. Note Using this filter does not disable other filters. AvailabilityZoneFilter Use this filter to launch instances on a Compute node in the availability zone specified by the instance. ComputeCapabilitiesFilter Use this filter to match namespaced properties defined in the flavor extra specs of an instance against the Compute node capabilities. You must prefix the flavor extra specs with the capabilities: namespace. A more efficient alternative to using the ComputeCapabilitiesFilter filter is to use CPU traits in your flavors, which are reported to the Placement service. Traits provide consistent naming for CPU features. For more information, see Filtering by using resource provider traits . ComputeFilter Use this filter to pass all Compute nodes that are operational and enabled. This filter should always be present. DifferentHostFilter Use this filter to enable scheduling of an instance on a different Compute node from a set of specific instances. To specify these instances when launching an instance, use the --hint argument with different_host as the key and the instance UUID as the value: ImagePropertiesFilter Use this filter to filter Compute nodes based on the following properties defined on the instance image: hw_architecture - Corresponds to the architecture of the host, for example, x86, ARM, and Power. img_hv_type - Corresponds to the hypervisor type, for example, KVM, QEMU, Xen, and LXC. img_hv_requested_version - Corresponds to the hypervisor version the Compute service reports. hw_vm_mode - Corresponds to the virtual machine mode, for example hvm, xen, uml, or exe. 
Compute nodes that can support the specified image properties contained in the instance are passed to the scheduler. For more information on image properties, see Image configuration parameters . IsolatedHostsFilter Use this filter to only schedule instances with isolated images on isolated Compute nodes. You can also prevent non-isolated images from being used to build instances on isolated Compute nodes by configuring filter_scheduler/restrict_isolated_hosts_to_isolated_images . To specify the isolated set of images and hosts use the filter_scheduler/isolated_hosts and filter_scheduler/isolated_images configuration options, for example: IoOpsFilter Use this filter to filter out hosts that have concurrent I/O operations that exceed the configured filter_scheduler/max_io_ops_per_host , which specifies the maximum number of I/O intensive instances allowed to run on the host. MetricsFilter Use this filter to limit scheduling to Compute nodes that report the metrics configured by using metrics/weight_setting . To use this filter, add the following configuration to your Compute environment file: By default, the Compute scheduler service updates the metrics every 60 seconds. To ensure the metrics are up-to-date, you can increase the frequency at which the metrics data is refreshed using the update_resources_interval configuration option. For example, use the following configuration to refresh the metrics data every 2 seconds: NUMATopologyFilter Use this filter to schedule instances with a NUMA topology on NUMA-capable Compute nodes. Use flavor extra_specs and image properties to specify the NUMA topology for an instance. The filter tries to match the instance NUMA topology to the Compute node topology, taking into consideration the over-subscription limits for each host NUMA cell. NumInstancesFilter Use this filter to filter out Compute nodes that have more instances running than specified by the max_instances_per_host option. PciPassthroughFilter Use this filter to schedule instances on Compute nodes that have the devices that the instance requests by using the flavor extra_specs . Use this filter if you want to reserve nodes with PCI devices, which are typically expensive and limited, for instances that request them. SameHostFilter Use this filter to enable scheduling of an instance on the same Compute node as a set of specific instances. To specify these instances when launching an instance, use the --hint argument with same_host as the key and the instance UUID as the value: ServerGroupAffinityFilter Use this filter to schedule instances in an affinity server group on the same Compute node. To create the server group, enter the following command: To launch an instance in this group, use the --hint argument with group as the key and the group UUID as the value: ServerGroupAntiAffinityFilter Use this filter to schedule instances that belong to an anti-affinity server group on different Compute nodes. To create the server group, enter the following command: To launch an instance in this group, use the --hint argument with group as the key and the group UUID as the value: SimpleCIDRAffinityFilter Use this filter to schedule instances on Compute nodes that have a specific IP subnet range. To specify the required range, use the --hint argument to pass the keys build_near_host_ip and cidr when launching an instance: 8.4. Compute scheduler weights Each Compute node has a weight that the scheduler can use to prioritize instance scheduling. 
After the Compute scheduler applies the filters, it selects the Compute node with the largest weight from the remaining candidate Compute nodes. The Compute scheduler determines the weight of each Compute node by performing the following tasks: The scheduler normalizes each weight to a value between 0.0 and 1.0. The scheduler multiplies the normalized weight by the weigher multiplier. The Compute scheduler calculates the weight normalization for each resource type by using the lower and upper values for the resource availability across the candidate Compute nodes: Nodes with the lowest availability of a resource (minval) are assigned '0'. Nodes with the highest availability of a resource (maxval) are assigned '1'. Nodes with resource availability within the minval - maxval range are assigned a normalized weight calculated by using the following formula: If all the Compute nodes have the same availability for a resource then they are all normalized to 0. For example, the scheduler calculates the normalized weights for available vCPUs across 10 Compute nodes, each with a different number of available vCPUs, as follows: Compute node 1 2 3 4 5 6 7 8 9 10 No of vCPUs 5 5 10 10 15 20 20 15 10 5 Normalized weight 0 0 0.33 0.33 0.67 1 1 0.67 0.33 0 The Compute scheduler uses the following formula to calculate the weight of a Compute node: The following table describes the available configuration options for weights. Note Weights can be set on host aggregates using the aggregate metadata key with the same name as the options detailed in the following table. If set on the host aggregate, the host aggregate value takes precedence. Table 8.2. Compute scheduler weights Configuration option Type Description filter_scheduler/weight_classes String Use this parameter to configure which of the following attributes to use for calculating the weight of each Compute node: nova.scheduler.weights.ram.RAMWeigher - Weighs the available RAM on the Compute node. nova.scheduler.weights.cpu.CPUWeigher - Weighs the available CPUs on the Compute node. nova.scheduler.weights.disk.DiskWeigher - Weighs the available disks on the Compute node. nova.scheduler.weights.metrics.MetricsWeigher - Weighs the metrics of the Compute node. nova.scheduler.weights.affinity.ServerGroupSoftAffinityWeigher - Weighs the proximity of the Compute node to other nodes in the given instance group. nova.scheduler.weights.affinity.ServerGroupSoftAntiAffinityWeigher - Weighs the proximity of the Compute node to other nodes in the given instance group. nova.scheduler.weights.compute.BuildFailureWeigher - Weighs Compute nodes by the number of recent failed boot attempts. nova.scheduler.weights.io_ops.IoOpsWeigher - Weighs Compute nodes by their workload. nova.scheduler.weights.pci.PCIWeigher - Weighs Compute nodes by their PCI availability. nova.scheduler.weights.cross_cell.CrossCellWeigher - Weighs Compute nodes based on which cell they are in, giving preference to Compute nodes in the source cell when moving an instance. nova.scheduler.weights.all_weighers - (Default) Uses all the above weighers. filter_scheduler/ram_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the available RAM. Set to a positive value to prefer hosts with more available RAM, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available RAM, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. 
The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly. filter_scheduler/disk_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the available disk space. Set to a positive value to prefer hosts with more available disk space, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available disk space, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the disk weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly. filter_scheduler/cpu_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the available vCPUs. Set to a positive value to prefer hosts with more available vCPUs, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available vCPUs, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the vCPU weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly. filter_scheduler/io_ops_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the host workload. Set to a negative value to prefer hosts with lighter workloads, which distributes the workload across more hosts. Set to a positive value to prefer hosts with heavier workloads, which schedules instances onto hosts that are already busy. The absolute value, whether positive or negative, controls how strong the I/O operations weigher is relative to other weighers. Default: -1.0 - The scheduler distributes the workload across more hosts. filter_scheduler/build_failure_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on recent build failures. Set to a positive value to increase the significance of build failures recently reported by the host. Hosts with recent build failures are then less likely to be chosen. Set to 0 to disable weighing compute hosts by the number of recent failures. Default: 1000000.0 filter_scheduler/cross_cell_move_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts during a cross-cell move. This option determines how much weight is placed on a host which is within the same source cell when moving an instance. By default, the scheduler prefers hosts within the same source cell when migrating an instance. Set to a positive value to prefer hosts within the same cell the instance is currently running. Set to a negative value to prefer hosts located in a different cell from that where the instance is currently running. Default: 1000000.0 filter_scheduler/pci_weight_multiplier Positive floating point Use this parameter to specify the multiplier to use to weigh hosts based on the number of PCI devices on the host and the number of PCI devices requested by an instance. If an instance requests PCI devices, then the more PCI devices a Compute node has the higher the weight allocated to the Compute node. 
filter_scheduler/pci_weight_multiplier
Type: Positive floating point
Description: Use this parameter to specify the multiplier to use to weigh hosts based on the number of PCI devices on the host and the number of PCI devices requested by an instance. If an instance requests PCI devices, then the more PCI devices a Compute node has, the higher the weight allocated to the Compute node.
For example, if there are three hosts available, one with a single PCI device, one with multiple PCI devices, and one without any PCI devices, then the Compute scheduler prioritizes these hosts based on the demands of the instance. The scheduler prefers the first host if the instance requests one PCI device, the second host if the instance requires multiple PCI devices, and the third host if the instance does not request a PCI device.
Configure this option to prevent non-PCI instances from occupying resources on hosts with PCI devices.
Default: 1.0

filter_scheduler/host_subset_size
Type: Integer
Description: Use this parameter to specify the size of the subset of filtered hosts from which to select the host. You must set this option to at least 1. A value of 1 selects the first host returned by the weighing functions. The scheduler ignores any value less than 1 and uses 1 instead.
Set to a value greater than 1 to prevent multiple scheduler processes that handle similar requests from selecting the same host, which creates a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host might be for a given request.
Default: 1

filter_scheduler/soft_affinity_weight_multiplier
Type: Positive floating point
Description: Use this parameter to specify the multiplier to use to weigh hosts for group soft-affinity.
Note: You must specify the microversion when creating a group with this policy:

    openstack --os-compute-api-version 2.15 server group create --policy soft-affinity <group_name>

Default: 1.0

filter_scheduler/soft_anti_affinity_weight_multiplier
Type: Positive floating point
Description: Use this parameter to specify the multiplier to use to weigh hosts for group soft-anti-affinity.
Note: You must specify the microversion when creating a group with this policy:

    openstack --os-compute-api-version 2.15 server group create --policy soft-anti-affinity <group_name>

Default: 1.0

metrics/weight_multiplier
Type: Floating point
Description: Use this parameter to specify the multiplier to use for weighting metrics. By default, weight_multiplier=1.0, which spreads instances across possible hosts.
Set to a number greater than 1.0 to increase the effect of the metric on the overall weight.
Set to a number between 0.0 and 1.0 to reduce the effect of the metric on the overall weight.
Set to 0.0 to ignore the metric value and return the value of the weight_of_unavailable option.
Set to a negative number to prioritize hosts with lower metrics, and stack instances on hosts.
Default: 1.0

metrics/weight_setting
Type: Comma-separated list of metric=ratio pairs
Description: Use this parameter to specify the metrics to use for weighting, and the ratio to use to calculate the weight of each metric. Valid metric names:
cpu.frequency - CPU frequency
cpu.user.time - CPU user mode time
cpu.kernel.time - CPU kernel time
cpu.idle.time - CPU idle time
cpu.iowait.time - CPU I/O wait time
cpu.user.percent - CPU user mode percentage
cpu.kernel.percent - CPU kernel percentage
cpu.idle.percent - CPU idle percentage
cpu.iowait.percent - CPU I/O wait percentage
cpu.percent - Generic CPU use
Example: weight_setting=cpu.user.time=1.0
For an environment file snippet that configures metric weighing, see the metrics example after this table.

metrics/required
Type: Boolean
Description: Use this parameter to specify how to handle configured metrics/weight_setting metrics that are unavailable:
True - Metrics are required. If the metric is unavailable, an exception is raised. To avoid the exception, use the MetricsFilter filter in NovaSchedulerEnabledFilters.
False - The unavailable metric is treated as a negative factor in the weighing process. Set the returned value by using the weight_of_unavailable configuration option.
metrics/weight_of_unavailable
Type: Floating point
Description: Use this parameter to specify the weight to use if any metrics/weight_setting metric is unavailable and metrics/required=False.
Default: -10000.0
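As referenced in the ram_weight_multiplier entry, the following is a minimal sketch of an environment file that stacks instances by preferring hosts with less free RAM, disk space, and vCPU capacity. It follows the parameter_defaults / ComputeExtraConfig / nova::config::nova_config convention used for Compute configuration in this guide; the -1.0 values are illustrative assumptions, not recommended settings.

    parameter_defaults:
      ComputeExtraConfig:
        nova::config::nova_config:
          filter_scheduler/ram_weight_multiplier:
            value: -1.0
          filter_scheduler/disk_weight_multiplier:
            value: -1.0
          filter_scheduler/cpu_weight_multiplier:
            value: -1.0

With negative multipliers, hosts that already carry instances receive a higher weight, so the scheduler fills each host before scheduling to a less-used host.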
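As referenced in the metrics/weight_setting entry, the following is a minimal sketch of how the metrics weigher options might be combined in an environment file. The metric choice and values are illustrative assumptions; adjust them for your deployment.

    parameter_defaults:
      ComputeExtraConfig:
        nova::config::nova_config:
          # Assumes the CPU monitor is enabled so that cpu.* metrics are reported.
          DEFAULT/compute_monitors:
            value: 'cpu.virt_driver'
          metrics/weight_setting:
            value: 'cpu.user.time=1.0'
          metrics/required:
            value: 'false'
          metrics/weight_of_unavailable:
            value: -10000.0

With metrics/required set to false, a host that does not report cpu.user.time is not rejected; instead, the weight_of_unavailable value is applied, which heavily penalizes that host in the weighing process.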
[ "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "(overcloud)USD openstack image create ... trait-image", "(overcloud)USD openstack --os-placement-api-version 1.6 trait list", "(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME", "(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')", "(overcloud)USD echo USDexisting_traits", "(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>", "(overcloud)USD openstack image set --property trait:HW_CPU_X86_AVX512BW=required trait-image", "(overcloud)USD openstack image set --property trait:COMPUTE_VOLUME_MULTI_ATTACH=forbidden trait-image", "(overcloud)USD openstack flavor create --vcpus 1 --ram 512 --disk 2 trait-flavor", "(overcloud)USD openstack --os-placement-api-version 1.6 trait list", "(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME", "(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')", "(overcloud)USD echo USDexisting_traits", "(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>", "(overcloud)USD openstack flavor set --property trait:HW_CPU_X86_AVX512BW=required trait-flavor", "(overcloud)USD openstack flavor set --property trait:COMPUTE_VOLUME_MULTI_ATTACH=forbidden trait-flavor", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "(overcloud)USD openstack --os-placement-api-version 1.6 trait list", "(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME", "(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')", "(overcloud)USD echo USDexisting_traits", "(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>", "(overcloud)USD openstack --os-compute-api-version 2.53 aggregate set --property trait:<TRAIT_NAME>=required <aggregate_name>", "(overcloud)USD openstack flavor set --property trait:<TRAIT_NAME>=required <flavor> (overcloud)USD openstack image set --property trait:<TRAIT_NAME>=required <image>", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "parameter_defaults: NovaSchedulerEnabledFilters: - AggregateInstanceExtraSpecsFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: filter_scheduler/weight_classes: value: nova.scheduler.weights.all_weighers", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: filter_scheduler/weight_classes: value: nova.scheduler.weights.all_weighers filter_scheduler/ram_weight_multiplier: value: 2.0", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint 
different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: filter_scheduler/isolated_hosts: value: server1, server2 filter_scheduler/isolated_images: value: 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: DEFAULT/compute_monitors: value: 'cpu.virt_driver'", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: DEFAULT/update_resources_interval: value: '2'", "openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1", "openstack server group create --policy affinity <group_name>", "openstack server create --image <image> --flavor <flavor> --hint group=<group_uuid> <instance_name>", "openstack server group create --policy anti-affinity <group_name>", "openstack server create --image <image> --flavor <flavor> --hint group=<group_uuid> <instance_name>", "openstack server create --image <image> --flavor <flavor> --hint build_near_host_ip=<ip_address> --hint cidr=<subnet_mask> <instance_name>", "(node_resource_availability - minval) / (maxval - minval)", "(w1_multiplier * norm(w1)) + (w2_multiplier * norm(w2)) +", "openstack --os-compute-api-version 2.15 server group create --policy soft-affinity <group_name>", "openstack --os-compute-api-version 2.15 server group create --policy soft-affinity <group_name>" ]