title | content | commands | url
---|---|---|---|
Deploying OpenShift Data Foundation on any platform
|
Deploying OpenShift Data Foundation on any platform Red Hat OpenShift Data Foundation 4.18 Instructions on deploying OpenShift Data Foundation on any platform including virtualized and cloud environments. Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on any platform. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Preface Red Hat OpenShift Data Foundation supports deployment on any platform that you provision including bare metal, virtualized, and cloud environments. Both internal and external OpenShift Data Foundation clusters are supported on these environments. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using the local storage devices on any platform, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. You can also deploy OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster and IBM FlashSystem. For instructions, see Deploying OpenShift Data Foundation in external mode . External mode deployment works on clusters that are detected as non-cloud. If your cluster is not detected correctly, open up a bug in Bugzilla . Before you begin the deployment of Red Hat OpenShift Data Foundation using a local storage, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices . After completing the preparatory steps, perform the following procedures: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation cluster on any platform . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. Note Make sure that the devices have a unique by-id device name for each available raw block device. 
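One illustrative way to confirm these device requirements (not part of the documented procedure) is to inspect the candidate devices from a debug shell on each selected node; the node name worker-0 below is a placeholder:
# Hypothetical node name; repeat for each node you plan to select.
oc debug node/worker-0 -- chroot /host lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
# Confirm that every raw block device you intend to use has a /dev/disk/by-id entry.
oc debug node/worker-0 -- chroot /host ls -l /dev/disk/by-id/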
The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide . Chapter 2. Deploy OpenShift Data Foundation using local storage devices You can deploy OpenShift Data Foundation on any platform including virtualized and cloud environments where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation cluster on any platform . 2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. 
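Before starting the installation, an optional CLI sanity check can confirm that enough nodes are available. This is an illustrative sketch only; the label selectors below are assumptions that match typical clusters:
# Count worker nodes; at least three are required for OpenShift Data Foundation.
oc get nodes -l node-role.kubernetes.io/worker --no-headers | wc -l
# If you dedicate infrastructure nodes instead, check that label as well (assumed selector).
oc get nodes -l node-role.kubernetes.io/infra --no-headers | wc -l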
Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up so that the console changes take effect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify that the Data Foundation dashboard is available. 2.3. Creating OpenShift Data Foundation cluster on any platform Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. Ensure that the disk type is SSD, which is the only supported disk type. If you want to use Multus networking, you must create network attachment definitions (NADs) before deployment; they are attached to the cluster later. For more information, see Multi network plug-in (Multus) support and Creating network attachment definitions . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides a high availability solution for the Multicloud Object Gateway, where the PostgreSQL pod would otherwise be a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click Next .
Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . The local volume set name appears as the default value for the storage class name. You can change the name. Select one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you create with three or more nodes is spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see the knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling is enabled at the time of deployment and cannot be enabled or disabled later. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed if at least 24 CPUs and 72 GiB of RAM are available. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected as the default value. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB and the maximum available size for the devices that need to be included. Maximum Disks Limit This indicates the maximum number of Persistent Volumes (PVs) that you can create on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click Next . A pop-up to confirm the creation of the LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This value takes some time to appear. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource-constrained environment with minimum resources that are lower than recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures.
For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Select one of the following: Default (SDN) If you are using a single network. Custom (Multus) If you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface and leave the Cluster Network Interface blank. Click . In the Review and create page, review the configuration details. 
To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System Click ocs-storagecluster-storagesystem -> Resources . Verify that the Status of the StorageCluster is Ready and has a green tick mark to it. To verify if the flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System Click ocs-storagecluster-storagesystem -> Resources -> ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled: To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . To verify the multi networking (Multus), see Verifying the Multus networking . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide and follow the instructions in the "Scaling storage of bare metal OpenShift Data Foundation cluster" section. 2.4. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . Verify the Multus networking . 2.4.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
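If you prefer the command line for this check, the following is an illustrative sketch (not a documented step) that lists the pods in the openshift-storage namespace:
# List all pods in the OpenShift Data Foundation namespace with their status.
oc get pods -n openshift-storage
# Show only pods that are not Running or Completed; this should return nothing when healthy.
oc get pods -n openshift-storage --no-headers | grep -Ev 'Running|Completed'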
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set the filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) 2.4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, the application data residing on the Multicloud Object Gateway can be lost entirely. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.4.4. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console.
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 2.4.5. Verifying the Multus networking To determine if Multus is working in your cluster, verify the Multus networking. Procedure Based on your Network configuration choices, the OpenShift Data Foundation operator will do one of the following: If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster ) was selected for the Public Network Interface, then the traffic between the application pods and the OpenShift Data Foundation cluster will happen on this network. Additionally the cluster will be self configured to also use this network for the replication and rebalancing traffic between OSDs. If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster ) were selected for the Public Network Interface and the Cluster Network Interface respectively during the Storage Cluster installation, then client storage traffic will be on the public network and cluster network for the replication and rebalancing traffic between OSDs. To verify the network configuration is correct, complete the following: In the OpenShift console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources -> ocs-storagecluster . In the YAML tab, search for network in the spec section and ensure the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic. Sample output: To verify the network configuration is correct using the command line interface, run the following commands: Sample output: Confirm the OSD pods are using correct network In the openshift-storage namespace use one of the OSD pods to verify the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic. Note Only the OSD pods will connect to both Multus public and cluster networks if both are created. All other OCS pods will connect to the Multus public network. Sample output: To confirm the OSD pods are using correct network using the command line interface, run the following command (requires the jq utility): Sample output: Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway (MCG) component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. After deploying the MCG component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. 
For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. 
Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. 
In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 4. View OpenShift Data Foundation Topology The topology shows a mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indications of alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
|
[
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"[..] spec: [..] network: ipFamily: IPv4 provider: multus selectors: cluster: openshift-storage/ocs-cluster public: openshift-storage/ocs-public [..]",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o=jsonpath='{.spec.network}{\"\\n\"}'",
"{\"ipFamily\":\"IPv4\",\"provider\":\"multus\",\"selectors\":{\"cluster\":\"openshift-storage/ocs-cluster\",\"public\":\"openshift-storage/ocs-public\"}}",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}'",
"[{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.30\" ], \"default\": true, \"dns\": {} },{ \"name\": \"openshift-storage/ocs-cluster\", \"interface\": \"net1\", \"ips\": [ \"192.168.2.1\" ], \"mac\": \"e2:04:c6:81:52:f1\", \"dns\": {} },{ \"name\": \"openshift-storage/ocs-public\", \"interface\": \"net2\", \"ips\": [ \"192.168.1.1\" ], \"mac\": \"ee:a0:b6:a4:07:94\", \"dns\": {} }]",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}' | jq -r '.[].name'",
"openshift-sdn openshift-storage/ocs-cluster openshift-storage/ocs-public",
"oc annotate namespace openshift-storage openshift.io/node-selector="
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/deploying_openshift_data_foundation_on_any_platform/index
|
12.4. Storing Certificates in NSS Databases
|
12.4. Storing Certificates in NSS Databases By default, certmonger uses .pem files to store the key and the certificate. To store the key and the certificate in an NSS database, specify the -d and -n options with the command you use for requesting the certificate. -d sets the security database location. -n gives the certificate nickname, which is used for the certificate in the NSS database. Note The -d and -n options are used instead of the -f and -k options that give the .pem file. For example: Requesting a certificate using ipa-getcert and local-getcert allows you to specify two additional options: -F gives the file where the certificate of the CA is to be stored. -a gives the location of the NSS database where the certificate of the CA is to be stored. Note If you request a certificate using selfsign-getcert , there is no need to specify the -F and -a options because generating a self-signed certificate does not involve any CA. Supplying the -F option, the -a option, or both with local-getcert allows you to obtain a copy of the CA certificate that is required in order to verify a certificate issued by the local signer. For example:
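(The selfsign-getcert and local-getcert commands for these examples appear in the commands list of this entry.) As an additional, hedged illustration of the -d and -n options with ipa-getcert, assuming a host enrolled in IdM; the database path and nickname are placeholders:
# Illustrative only: request a certificate into an NSS database with ipa-getcert.
# /etc/pki/nssdb and ExampleCert are placeholder values.
ipa-getcert request -d /etc/pki/nssdb -n ExampleCert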
|
[
"selfsign-getcert request -d /export/alias -n ServerCert",
"local-getcert request -F /etc/httpd/conf/ssl.crt/ca.crt -n ServerCert -f /etc/httpd/conf/ssl.crt/server.crt -k /etc/httpd/conf/ssl.key/server.key"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/working_with_certmonger-using_certmonger_with_nss
|
Chapter 9. Multicloud Object Gateway
|
Chapter 9. Multicloud Object Gateway 9.1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. 9.2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the MCG endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. Prerequisites A running OpenShift Container Storage Platform Download the MCG command-line interface for easier management: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, For IBM Power Systems, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the mcg package from the OpenShift Container Storage RPMs found at Download RedHat OpenShift Container Storage page . Note Choose the correct Product Variant according to your architecture. You can access the relevant endpoint, access key, and secret access key two ways: Section 9.2.1, "Accessing the Multicloud Object Gateway from the terminal" Section 9.2.2, "Accessing the Multicloud Object Gateway from the MCG command-line interface" Accessing the MCG bucket(s) using the virtual-hosted style Example 9.1. Example If the client application tries to access https://<bucket-name>.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com where <bucket-name> is the name of the MCG bucket For example, https://mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com A DNS entry is needed for mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com to point to the S3 Service. Important Ensure that you have a DNS entry in order to point the client application to the MCG bucket(s) using the virtual-hosted style. 9.2.1. Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the MCG endpoint, including its access key ( AWS_ACCESS_KEY_ID value) and secret access key ( AWS_SECRET_ACCESS_KEY value): The output will look similar to the following: 1 access key ( AWS_ACCESS_KEY_ID value) 2 secret access key ( AWS_SECRET_ACCESS_KEY value) 3 MCG endpoint 9.2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, For IBM Power Systems, use the following command: For IBM Z infrastructure, use the following command: Procedure Run the status command to access the endpoint, access key, and secret access key: The output will look similar to the following: 1 endpoint 2 access key 3 secret access key You now have the relevant endpoint, access key, and secret access key in order to connect to your applications. Example 9.2. Example If AWS S3 CLI is the application, the following command will list buckets in OpenShift Container Storage: 9.3. Allowing user access to the Multicloud Object Gateway Console To allow access to the Multicloud Object Gateway Console to a user, ensure that the user meets the following conditions: User is in cluster-admins group. User is in system:cluster-admins virtual group. 
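As a hedged preview of the procedure that follows, the group setup typically involves oc commands along these lines; <user-name> is a placeholder:
# Illustrative sketch of the group setup described in the procedure below.
oc adm groups new cluster-admins
oc adm policy add-cluster-role-to-group cluster-admin cluster-admins
# Add or remove users to control access to the Multicloud Object Gateway console.
oc adm groups add-users cluster-admins <user-name>
oc adm groups remove-users cluster-admins <user-name>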
Prerequisites A running OpenShift Container Storage Platform. Procedure Enable access to the Multicloud Object Gateway console. Perform the following steps once on the cluster : Create a cluster-admins group. Bind the group to the cluster-admin role. Add or remove users from the cluster-admins group to control access to the Multicloud Object Gateway console. To add a set of users to the cluster-admins group : where <user-name> is the name of the user to be added. Note If you are adding a set of users to the cluster-admins group, you do not need to bind the newly added users to the cluster-admin role to allow access to the OpenShift Container Storage dashboard. To remove a set of users from the cluster-admins group : where <user-name> is the name of the user to be removed. Verification steps On the OpenShift Web Console, login as a user with access permission to Multicloud Object Gateway Console. Navigate to Storage Overview Object tab select the Multicloud Object Gateway link . On the Multicloud Object Gateway Console, login as the same user with access permission. Click Allow selected permissions . 9.4. Adding storage resources for hybrid or Multicloud 9.4.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Container Storage. Prerequisites Administrator access to OpenShift. Procedure Click Operators Installed Operators from the left pane of the OpenShift Web Console to view the installed operators. Click OpenShift Container Storage Operator. On the OpenShift Container Storage Operator page, scroll right and click the Backing Store tab. Click Create Backing Store . Figure 9.1. Create Backing Store page On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Enter an Endpoint . This is optional. Select a Secret from drop down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 9.4.2, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps Click Operators Installed Operators . Click OpenShift Container Storage Operator. Search for the new backing store or click Backing Store tab to view all the backing stores. 9.4.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud provider and clusters. You must add a backing storage that can be used by the MCG. 
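For orientation, creating a backing store from the MCG command-line interface generally takes the following shape. This is a hedged sketch of the AWS case detailed in Section 9.4.2.1; every angle-bracketed value is a placeholder:
# Illustrative sketch only; see Section 9.4.2.1 for the documented procedure.
noobaa backingstore create aws-s3 <backingstore_name> \
  --access-key=<AWS ACCESS KEY> \
  --secret-key=<AWS SECRET ACCESS KEY> \
  --target-bucket=<bucket-name> \
  -n openshift-storage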
Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 9.4.2.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 9.4.2.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 9.4.2.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 9.4.2.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 9.4.2.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 9.4.3, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 9.4.2.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the mcg package from the OpenShift Container Storage RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <bucket-name> with an existing AWS bucket name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 9.4.2.2. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, For IBM Power Systems, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the mcg package from the OpenShift Container Storage RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. 
To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <bucket-name> with an existing IBM COS bucket name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <endpoint> with a regional endpoint that corresponds to the location of the existing IBM bucket name. This argument tells Multicloud Object Gateway which endpoint to use for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 9.4.2.3. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the mcg package from the OpenShift Container Storage RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> with an AZURE account key and account name you created for this purpose. Replace <blob container name> with an existing Azure blob container name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <blob-container-name> with an existing Azure blob container name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 9.4.2.4. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. 
For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the mcg package from the OpenShift Container Storage RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <PATH TO GCP PRIVATE KEY JSON FILE> with a path to your GCP private key created for this purpose. Replace <GCP bucket name> with an existing GCP object storage bucket name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own GCP service account private key using Base64, and use the results in place of <GCP PRIVATE KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <target bucket> with an existing Google storage bucket. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 9.4.2.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the mcg package from the OpenShift Container Storage RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <NUMBER OF VOLUMES> with the number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. Replace <VOLUME SIZE> with the required size, in GB, of each volume Replace <LOCAL STORAGE CLASS> with the local storage class, recommended to use ocs-storagecluster-ceph-rbd The output will be similar to the following: You can also add storage resources using a YAML: Apply the following YAML for a specific backing store: Replace <backingstore_name> with the name of the backingstore. Replace <NUMBER OF VOLUMES> with the number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. Replace <VOLUME SIZE> with the required size, in GB, of each volume. Note that the letter G should remain Replace <LOCAL STORAGE CLASS> with the local storage class, recommended to use ocs-storagecluster-ceph-rbd 9.4.3. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Gateway (RGW). The following procedure shows how to create an S3 compatible Multicloud Object Gateway backing store for Red Hat Ceph Storage's RADOS Gateway. 
Note that when RGW is deployed, the OpenShift Container Storage Operator creates an S3 compatible backingstore for Multicloud Object Gateway automatically. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following NooBaa command: To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the secret access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the previous step. Replace <bucket-name> with an existing RGW bucket name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with the CephObjectStore user in the previous step. Replace <bucket-name> with an existing RGW bucket name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 9.4.4. Adding storage resources for hybrid and Multicloud using the user interface Procedure In your OpenShift Storage console, click Storage Overview Object tab Multicloud Object Gateway link. Select the Resources tab on the left. From the list that populates, select Add Cloud Resource . Select Add new connection . Select the relevant native cloud provider or S3 compatible option and fill in the details. Select the newly created connection and map it to the existing bucket. Repeat these steps to create as many backing stores as needed. Note Resources created in NooBaa UI cannot be used by OpenShift UI or MCG CLI. 9.4.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Claim (OBC). Use this procedure to create a bucket class in OpenShift Container Storage. Procedure Click Operators Installed Operators from the left pane of the OpenShift Web Console to view the installed operators. Click OpenShift Container Storage Operator. On the OpenShift Container Storage Operator page, scroll right and click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Namespace Data is stored on the NamespaceStores without performing de-duplication, compression or encryption. Standard Data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. By default, Standard is selected. Enter a Bucket Class Name . Click Next . In Placement Policy, select Tier 1 - Policy Type and click Next . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier.
Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click Next . Alternatively, you can also create a new backing store . Note You need to select at least two backing stores when you select Policy Type as Mirror in the previous step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps Click Operators Installed Operators . Click OpenShift Container Storage Operator. Search for the new Bucket Class or click the Bucket Class tab to view all the Bucket Classes. 9.4.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button in the OpenShift web console. Prerequisites Administrator access to OpenShift. Procedure Log into the OpenShift Web Console . Click Operators Installed Operators . Click OpenShift Container Storage Operator . On the OpenShift Container Storage Operator page, scroll right and click the Bucket Class tab. Click the action menu (...) next to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file. Make the required changes in this file and click Save . 9.4.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure Click Operators Installed Operators to view the installed operators. Click OpenShift Container Storage Operator . Click the Bucket Class tab. Click the action menu (...) next to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, clear the name of the backing store. Click Save . 9.5. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Connect your providers to the Multicloud Object Gateway . Create a namespace resource for each of your providers so they can be added to a namespace bucket. Add your namespace resources to a namespace bucket and configure the bucket to read from and write to the appropriate namespace resources. You can interact with objects in a namespace bucket using the S3 API. See S3 API endpoints for objects in namespace buckets for more information. Note A namespace bucket can only be used if its write target is available and functional. 9.5.1. Adding provider connections to the Multicloud Object Gateway You need to add connections for each of your providers so that the Multicloud Object Gateway has access to the provider. Prerequisites Administrative access to the OpenShift Console.
Procedure In the OpenShift Console, click Storage Overview and click the Object tab. Click Multicloud Object Gateway and log in if prompted. Click Accounts and select an account to add the connection to. Click My Connections . Click Add Connection . Enter a Connection Name . Your cloud provider is shown in the Service dropdown by default. Change the selection to use a different provider. Your cloud provider's default endpoint is shown in the Endpoint field by default. Enter an alternative endpoint if required. Enter your Access Key for this cloud provider. Enter your Secret Key for this cloud provider. Click Save . 9.5.2. Adding namespace resources using the Multicloud Object Gateway Add existing storage to the Multicloud Object Gateway as namespace resources so that they can be included in namespace buckets for a unified view of existing storage targets, such as Amazon Web Services S3 buckets, Microsoft Azure blobs, and IBM Cloud Object Storage buckets. Prerequisites Administrative access to the OpenShift Console. Target connections (providers) are already added to the Multicloud Object Gateway. See Section 9.5.1, "Adding provider connections to the Multicloud Object Gateway" for details. Procedure In the OpenShift Console, click Storage Overview and click on the Object tab. Click Multicloud Object Gateway and log in if prompted. Click Resources , and click the Namespace Resources tab. Click Create Namespace Resource . In Target Connection , select the connection to be used for this namespace's storage provider. If you need to add a new connection, click Add New Connection and enter your provider details; see Section 9.5.1, "Adding provider connections to the Multicloud Object Gateway" for more information. In Target Bucket , select the name of the bucket to use as a target. Enter a Resource Name for your namespace resource. Click Create . Verification Verify that the new resource is listed with a green check mark in the State column, and 0 buckets in the Connected Namespace Buckets column. 9.5.3. Adding resources to namespace buckets using the Multicloud Object Gateway Add namespace resources to namespace buckets for a unified view of your storage across various providers. You can also configure read and write behavior so that only one provider accepts new data, while all providers allow existing data to be read. Prerequisites Ensure that all namespace resources you want to handle in a bucket have been added to the Multicloud Object Gateway: Adding namespace resources using the Multicloud Object Gateway . Procedure In the OpenShift Console, click Storage Overview and click the Object tab. Click Multicloud Object Gateway and log in if prompted. Click Buckets , and click on the Namespace Buckets tab. Click Create Namespace Bucket . On the Choose Name tab, specify a Name for the namespace bucket and click Next . On the Set Placement tab: Under Read Policy , select the checkbox for each namespace resource that the namespace bucket should read data from. Under Write Policy , specify which namespace resource the namespace bucket should write data to. Click Next . Do not make changes on the Set Caching Policy tab in a production environment. This tab is provided as a Development Preview and is subject to support limitations. Click Create . Verification Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name. You can also list the new bucket over the S3 interface, as shown in the sketch below.
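A minimal, hedged sketch of listing a namespace bucket through the MCG S3 endpoint, assuming you have already obtained S3 credentials for an MCG account and the S3 route or service address for your cluster (<NOOBAA_ACCESS_KEY>, <NOOBAA_SECRET_KEY>, and <S3-ENDPOINT> are placeholders, not values created by this procedure):

AWS_ACCESS_KEY_ID=<NOOBAA_ACCESS_KEY> AWS_SECRET_ACCESS_KEY=<NOOBAA_SECRET_KEY> aws --endpoint <S3-ENDPOINT> --no-verify-ssl s3 ls s3://<namespace-bucket-name>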
9.5.4. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in namespace buckets using the Amazon Simple Storage Service (S3) API. Red Hat OpenShift Container Storage 4.6 onwards supports the following namespace bucket operations: ListObjectVersions ListObjects PutObject CopyObject ListParts CreateMultipartUpload CompleteMultipartUpload UploadPart UploadPartCopy AbortMultipartUpload GetObjectAcl GetObject HeadObject DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 9.5.5. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 9.5.5.1. Adding an AWS S3 namespace bucket using YAML Prerequisites A running OpenShift Container Storage Platform Access to the Multicloud Object Gateway, see Chapter 2, Accessing the Multicloud Object Gateway with your applications Procedure Create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the Multicloud Object Gateway namespace buckets. To create a NamespaceStore resource, apply the following YAML: Replace <resource-name> with the name you want to give to the resource. Replace <namespacestore-secret-name> with the secret created in step 1. Replace <namespace-secret> with the namespace where the secret can be found. Replace <target-bucket> with the target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: Replace <my-bucket-class> with a unique namespace bucket class name. Replace <resource> with the name of a single namespace-store that will define the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with the name of a single namespace-store that will define the write target of the namespace bucket. Replace <read-resources> with a list of the names of the namespace-stores that will define the read targets of the namespace bucket. Apply the following YAML to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Note For IBM Power Systems and IBM Z infrastructure, use storageClassName as openshift-storage.noobaa.io . Replace <my-bucket-class> with the bucket class created in the previous step.
Once the OBC is provisioned by the operator, a bucket is created in the Multicloud Object Gateway, and the operator creates a Secret and ConfigMap with the same name as the OBC in the same namespace as the OBC. 9.5.5.2. Adding an IBM COS namespace bucket using YAML Prerequisites A running OpenShift Container Storage Platform Access to the Multicloud Object Gateway, see Chapter 2, Accessing the Multicloud Object Gateway with your applications Procedure Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the Multicloud Object Gateway namespace buckets. To create a NamespaceStore resource, apply the following YAML: Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <namespacestore-secret-name> with the secret created in step 1. Replace <namespace-secret> with the namespace where the secret can be found. Replace <target-bucket> with the target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: Replace <my-bucket-class> with a unique namespace bucket class name. Replace <resource> with the name of a single namespace-store that will define the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with the name of a single namespace-store that will define the write target of the namespace bucket. Replace <read-resources> with a list of the names of namespace-stores that will define the read targets of the namespace bucket. Apply the following YAML to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Note For IBM Power Systems and IBM Z infrastructure, use storageClassName as openshift-storage.noobaa.io . Replace <my-bucket-class> with the bucket class created in the previous step. Once the OBC is provisioned by the operator, a bucket is created in the Multicloud Object Gateway, and the operator creates a Secret and ConfigMap with the same name as the OBC in the same namespace as the OBC. 9.5.5.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites A running OpenShift Container Storage Platform Access to the Multicloud Object Gateway, see Chapter 2, Accessing the Multicloud Object Gateway with your applications Download the Multicloud Object Gateway command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z infrastructure, use the following command: Alternatively, you can install the mcg package from the OpenShift Container Storage RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource.
A NamespaceStore represents an underlying storage to be used as a read or write target for the data in Multicloud Object Gateway namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . Run the following command to create a namespace bucket class with a namespace policy of type single : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <resource> with a single namespace-store that will define the read and write target of the namespace bucket. Run the following command to create a namespace bucket class with a namespace policy of type multi : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with a single namespace-store that will define the write target of the namespace bucket. Replace <read-resources> with a list of namespace-stores separated by commas that will define the read targets of the namespace bucket. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <bucket-name> with a bucket name of your choice. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Once the OBC is provisioned by the operator, a bucket is created in the Multicloud Object Gateway, and the operator creates a Secret and ConfigMap with the same name as the OBC in the same namespace as the OBC. 9.5.5.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites A running OpenShift Container Storage Platform Access to the Multicloud Object Gateway, see Chapter 2, Accessing the Multicloud Object Gateway with your applications Download the Multicloud Object Gateway command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, for IBM Power Systems, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the mcg package from the OpenShift Container Storage RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in Multicloud Object Gateway namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name.
This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . Run the following command to create a namespace bucket class with a namespace policy of type single : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <resource> with a single namespace-store that will define the read and write target of the namespace bucket. Run the following command to create a namespace bucket class with a namespace policy of type multi : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with a single namespace-store that will define the write target of the namespace bucket. Replace <read-resources> with a list of namespace-stores separated by commas that will define the read targets of the namespace bucket. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <bucket-name> with a bucket name of your choice. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Once the OBC is provisioned by the operator, a bucket is created in the Multicloud Object Gateway, and the operator creates a Secret and ConfigMap with the same name as the OBC in the same namespace as the OBC. 9.6. Mirroring data for hybrid and Multicloud buckets The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud providers and clusters. Prerequisites You must first add a backing store that can be used by the MCG, see Section 9.4, "Adding storage resources for hybrid or Multicloud" . Then you create a bucket class that reflects the data management policy, mirroring. Procedure You can set up data mirroring in three ways: Section 9.6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 9.6.2, "Creating bucket classes to mirror data using a YAML" Section 9.6.3, "Configuring buckets to mirror data using the user interface" 9.6.1. Creating bucket classes to mirror data using the MCG command-line-interface From the MCG command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim, generating a new bucket that will be mirrored between two locations: 9.6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Section 9.8, "Object Bucket Claim" . 9.6.3. Configuring buckets to mirror data using the user interface In your OpenShift Storage console, click Storage Overview Object tab Multicloud Object Gateway link. On the NooBaa page, click the buckets icon on the left side. You will see a list of your buckets: Click the bucket you want to update. Click Edit Tier 1 Resources : Select Mirror and check the relevant resources you want to use for this bucket. In the following example, the data between noobaa-default-backing-store which is on RGW and AWS-backingstore which is on AWS is mirrored: Click Save . You can verify the resulting placement policy from the command line, as shown in the sketch below. Note Resources created in NooBaa UI cannot be used by OpenShift UI or MCG CLI.
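To confirm from the command line that the mirroring policy is in place, you can inspect the bucket class resource. This is a hedged sketch only: the bucket class name mirror-to-aws comes from the CLI example earlier in this section, and the exact output may differ between versions. In the output, the placementPolicy tier should show placement: Mirror together with the backing stores you selected.

oc get bucketclasses.noobaa.io mirror-to-aws -n openshift-storage -o yaml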
9.7. Bucket policies in the Multicloud Object Gateway OpenShift Container Storage supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 9.7.1. About bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 9.7.2. Using bucket policies Prerequisites A running OpenShift Container Storage Platform Access to the Multicloud Object Gateway, see Section 9.2, "Accessing the Multicloud Object Gateway with your applications" Procedure To use bucket policies in the Multicloud Object Gateway: Create the bucket policy in JSON format. See the following example: There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . Instructions for creating S3 users can be found in Section 9.7.3, "Creating an AWS S3 user in the Multicloud Object Gateway" . Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. 9.7.3. Creating an AWS S3 user in the Multicloud Object Gateway Prerequisites A running OpenShift Container Storage Platform Access to the Multicloud Object Gateway, see Section 9.2, "Accessing the Multicloud Object Gateway with your applications" Procedure In your OpenShift Storage console, navigate to Storage Overview Object tab and select the Multicloud Object Gateway link: Under the Accounts tab, click Create Account : Select S3 Access Only , provide the Account Name , for example, [email protected] . Click Next . Select S3 default placement , for example, noobaa-default-backing-store. Select Buckets Permissions . A specific bucket or all buckets can be selected. Click Create : 9.8. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 9.8.1, "Dynamic Object Bucket Claim" Section 9.8.2, "Creating an Object Bucket Claim using the command line interface" Section 9.8.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. The generated credentials are delivered in a Secret that shares the OBC's name, as shown in the sketch below.
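A minimal sketch of reading those credentials back with oc, assuming an OBC named <obc-name> in the namespace <app-namespace> (both are placeholders; the AWS-style key names follow the Secret fields shown in the examples later in this section):

oc get secret <obc-name> -n <app-namespace> -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
oc get secret <obc-name> -n <app-namespace> -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d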
9.8.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket claim to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Procedure Add the following lines to your application YAML: These lines are the Object Bucket Claim itself. Replace <obc-name> with a unique Object Bucket Claim name. Replace <obc-bucket-name> with a unique bucket name for your Object Bucket Claim. You can add more lines to the YAML file to automate the use of the Object Bucket Claim. The example below shows the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job will claim the Object Bucket from NooBaa, which will create a bucket and an account. Replace all instances of <obc-name> with your Object Bucket Claim name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace <obc-name> with the name of your Object Bucket Claim. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application BUCKET_PORT - The port available for the application The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name AWS_ACCESS_KEY_ID - Access key that is part of the credentials AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that they are compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 9.8.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, for IBM Power Systems, use the following command: For IBM Z infrastructure, use the following command: Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique Object Bucket Claim name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the Object Bucket Claim configuration map and secret will be created, for example, myapp-namespace . Example output: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC.
Run the following command to view the Object Bucket Claim: Example output: Run the following command to view the YAML file for the new Object Bucket Claim: Example output: In your openshift-storage namespace, you can find the configuration map and the secret to use this Object Bucket Claim. The configuration map and the secret have the same name as the Object Bucket Claim. To view the secret: Example output: The secret gives you the S3 access credentials. To view the configuration map: Example output: The configuration map contains the S3 endpoint information for your application. 9.8.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 9.8.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Bucket Claims . Click Create Object Bucket Claim : Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway Note The RGW OBC storage class is only available with fresh installations of OpenShift Container Storage version 4.5. It does not apply to clusters upgraded from previous OpenShift Container Storage releases. Click Create . Once you create the OBC, you are redirected to its detail page: Additional Resources Section 9.8, "Object Bucket Claim" 9.9. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. Important Cache buckets are a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . AWS S3 IBM COS 9.9.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z infrastructure, use the following command: Alternatively, you can install the mcg package from the OpenShift Container Storage RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource.
A NamespaceStore represents an underlying storage to be used as a read or write target for the data in Multicloud Object Gateway namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 9.9.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, for IBM Power Systems, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the mcg package from the OpenShift Container Storage RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in Multicloud Object Gateway namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells Multicloud Object Gateway which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> .
Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the IBM COS bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 9.10. Scaling Multicloud Object Gateway performance by adding endpoints The Multicloud Object Gateway performance may vary from one environment to another. In some cases, specific applications require faster performance, which can be easily addressed by scaling S3 endpoints. The Multicloud Object Gateway resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service 9.10.1. S3 endpoints in the Multicloud Object Gateway The S3 endpoint is a service that every Multicloud Object Gateway provides by default and that handles the heavy lifting of data digestion in the Multicloud Object Gateway. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the Multicloud Object Gateway. 9.10.2. Scaling with storage nodes Prerequisites A running OpenShift Container Storage cluster on OpenShift Container Platform with access to the Multicloud Object Gateway. A storage node in the Multicloud Object Gateway is a NooBaa daemon container attached to one or more Persistent Volumes and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods. Procedure In the Multicloud Object Gateway user interface, from the Overview page, click Add Storage Resources : In the window, click Deploy Kubernetes Pool : In the Create Pool step, create the target pool for the future installed nodes. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources Storage resources Resource name : 9.11. Automatic scaling of Multicloud Object Gateway endpoints The number of Multicloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Container Storage clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request.
When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG.
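This automatic scaling is typically driven by a horizontal pod autoscaler on the MCG endpoint deployment; that detail, and the resource names below, are assumptions and may vary between versions. A hedged sketch for observing the current and desired endpoint counts:

oc get hpa -n openshift-storage
oc get pods -n openshift-storage | grep noobaa-endpoint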
|
[
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"oc describe noobaa -n openshift-storage",
"Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa status -n openshift-storage",
"INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] ✅ Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] ✅ Exists: Namespace \"openshift-storage\" INFO[0004] ✅ Exists: ServiceAccount \"noobaa\" INFO[0005] ✅ Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] ✅ Exists: RoleBinding \"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" INFO[0006] ✅ Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] ✅ Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] ✅ Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] ✅ Exists: NooBaa \"noobaa\" INFO[0007] ✅ Exists: StatefulSet \"noobaa-core\" INFO[0007] ✅ Exists: Service \"noobaa-mgmt\" INFO[0008] ✅ Exists: Service \"s3\" INFO[0008] ✅ Exists: Secret \"noobaa-server\" INFO[0008] ✅ Exists: Secret \"noobaa-operator\" INFO[0008] ✅ Exists: Secret \"noobaa-admin\" INFO[0009] ✅ Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] ✅ Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] ✅ (Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] ✅ (Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] ✅ (Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] ✅ (Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] ✅ (Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] ✅ (Optional) Exists: Route \"s3\" INFO[0011] ✅ Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] ✅ System Phase is \"Ready\" INFO[0011] ✅ Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# 
#-----------------# No OBC's found.",
"AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls",
"oc adm groups new cluster-admins",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admins",
"oc adm groups add-users cluster-admins <user-name> <user-name> <user-name>",
"oc adm groups remove-users cluster-admins <user-name> <user-name> <user-name>",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name>",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name>",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create pv-pool <backingstore_name> --num-volumes=<NUMBER OF VOLUMES> --pv-size-gb=<VOLUME SIZE> --storage-class=<LOCAL STORAGE CLASS>",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> storageClass: <LOCAL STORAGE CLASS> type: pv-pool",
"noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint>",
"get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"",
"apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: noobaa.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: noobaa.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore > --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror",
"noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror",
"additionalConfig: bucketclass: mirror-to-aws",
"{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }",
"aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy BucketPolicy",
"aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io",
"apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY",
"oc apply -f <yaml.file>",
"oc get cm <obc-name>",
"oc get secret <obc_name> -o yaml",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa obc create <obc-name> -n openshift-storage",
"INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"",
"oc get obc -n openshift-storage",
"NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s",
"oc get obc test21obc -o yaml -n openshift-storage",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound",
"oc get -n openshift-storage secret test21obc -o yaml",
"Example output: apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque",
"oc get -n openshift-storage cm test21obc -o yaml",
"apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-ocs-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_and_managing_openshift_container_storage_using_red_hat_openstack_platform/multicloud-object-gateway_osp
|
3.2. Boot Loader
|
3.2. Boot Loader Installation media for IBM Power Systems now use the GRUB2 boot loader instead of the previously offered yaboot. For the big endian variant of Red Hat Enterprise Linux for POWER, GRUB2 is preferred but yaboot can also be used. The newly introduced little endian variant requires GRUB2 to boot. The Installation Guide has been updated with instructions for setting up a network boot server for IBM Power Systems using GRUB2.
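For illustration only, a minimal sketch of what such a GRUB2 network boot setup might look like on a RHEL 7 TFTP server; the package names, directory paths, and repository URL below are assumptions for the example, not values taken from the Installation Guide:

    # Hypothetical sketch: prepare a GRUB2 netboot directory for Power clients
    yum install grub2-tools tftp-server
    grub2-mknetdir --net-directory=/var/lib/tftpboot
    # Then add a menu entry to the grub.cfg generated under the net directory, e.g.:
    # menuentry 'Install Red Hat Enterprise Linux 7' {
    #     linux rhel7/ppc64le/vmlinuz ro inst.repo=http://192.168.0.1/rhel7/
    #     initrd rhel7/ppc64le/initrd.img
    # }

A network-booted IBM Power client then loads GRUB2 over TFTP and starts the installer from this entry.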
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/ch03s02
|
Chapter 4. Advisories related to this release
|
Chapter 4. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2024:0240 RHSA-2024:0241 RHSA-2024:0242 RHSA-2024:0244 RHSA-2024:0246 RHSA-2024:0267 Revised on 2024-05-03 15:36:05 UTC
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.10/openjdk-17010-advisory_openjdk
|
Operators
|
Operators OpenShift Container Platform 4.17 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml",
"annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.",
"my-catalog └── my-operator ├── index.yaml └── deprecations.yaml",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"registry.redhat.io/redhat/redhat-operator-index:v4.16",
"registry.redhat.io/redhat/redhat-operator-index:v4.17",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.30 priority: -400 publisher: Example Org",
"quay.io/example-org/example-catalog:v1.30",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created",
"packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1",
"olm.skipRange: <semver_range>",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'",
"properties: - type: olm.kubeversion value: version: \"1.16.0\"",
"properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'",
"type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue",
"apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100",
"dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"",
"attenuated service account query failed - more than one operator group(s) are managing this namespace count=2",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: - name: v1 4 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9",
"oc create -f <file_name>.yaml",
"/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/",
"/apis/stable.example.com/v1/namespaces/*/crontabs/",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13",
"oc create -f <file_name>.yaml",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml",
"oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2",
"kind: Subscription spec: installPlanApproval: Manual 1",
"kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1",
"kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3",
"kind: Subscription spec: config: env: - name: AUDIENCE value: \"<audience_url>\" 1 - name: SERVICE_ACCOUNT_EMAIL value: \"<service_account_email>\" 2",
"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>",
"<service_account_name>@<project_id>.iam.gserviceaccount.com",
"oc apply -f subscription.yaml",
"oc describe subscription <subscription_name> -n <namespace>",
"oc describe operatorgroup <operatorgroup_name> -n <namespace>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml",
"oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2",
"kind: Subscription spec: installPlanApproval: Manual 1",
"kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1",
"kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3",
"kind: Subscription spec: config: env: - name: AUDIENCE value: \"<audience_url>\" 1 - name: SERVICE_ACCOUNT_EMAIL value: \"<service_account_email>\" 2",
"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>",
"<service_account_name>@<project_id>.iam.gserviceaccount.com",
"oc apply -f subscription.yaml",
"oc describe subscription <subscription_name> -n <namespace>",
"oc describe operatorgroup <operatorgroup_name> -n <namespace>",
"apiVersion: v1 kind: Namespace metadata: name: team1-operator",
"oc create -f team1-operator.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: team1-operatorgroup namespace: team1-operator spec: targetNamespaces: - team1 1",
"oc create -f team1-operatorgroup.yaml",
"apiVersion: v1 kind: Namespace metadata: name: global-operators",
"oc create -f global-operators.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: global-operatorgroup namespace: global-operators",
"oc create -f global-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV",
"currentCSV: serverless-operator.v1.28.0",
"oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless",
"subscription.operators.coreos.com \"serverless-operator\" deleted",
"oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless",
"clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"oc get csvs -n openshift",
"oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true 1 EOF",
"oc get events",
"LAST SEEN TYPE REASON OBJECT MESSAGE 85s Warning DisabledCopiedCSVs clusterserviceversion/my-csv.v1.0.0 CSV copying disabled for operators/my-csv.v1.0.0",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide",
"oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2",
"- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c",
"apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc edit operatorcondition <name>",
"apiVersion: operators.coreos.com/v2 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret type: kubernetes.io/service-account-token 1 metadata: name: scoped namespace: scoped annotations: kubernetes.io/service-account.name: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped 1 targetNamespaces: - scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: scoped spec: channel: stable-v1 name: openshift-cert-manager-operator source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF",
"kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]",
"kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]",
"apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23",
"apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: Failed",
"mkdir <catalog_dir>",
"opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.17 1",
". 1 ├── <catalog_dir> 2 └── <catalog_dir>.Dockerfile 3",
"opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <catalog_dir>/index.yaml 6",
"opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <catalog_dir>/index.yaml 2",
"--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1",
"opm validate <catalog_dir>",
"echo USD?",
"0",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman login <registry>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm render <registry>/<namespace>/<catalog_image_name>:<tag> -o yaml > <catalog_dir>/index.yaml",
"--- defaultChannel: release-2.7 icon: base64data: <base64_string> mediatype: image/svg+xml name: example-operator schema: olm.package --- entries: - name: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.0' - name: example-operator.v2.7.1 replaces: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.1' - name: example-operator.v2.7.2 replaces: example-operator.v2.7.1 skipRange: '>=2.6.0 <2.7.2' - name: example-operator.v2.7.3 replaces: example-operator.v2.7.2 skipRange: '>=2.6.0 <2.7.3' - name: example-operator.v2.7.4 replaces: example-operator.v2.7.3 skipRange: '>=2.6.0 <2.7.4' name: release-2.7 package: example-operator schema: olm.channel --- image: example.com/example-inc/example-operator-bundle@sha256:<digest> name: example-operator.v2.7.0 package: example-operator properties: - type: olm.gvk value: group: example-group.example.io kind: MyObject version: v1alpha1 - type: olm.gvk value: group: example-group.example.io kind: MyOtherObject version: v1beta1 - type: olm.package value: packageName: example-operator version: 2.7.0 - type: olm.bundle.object value: data: <base64_string> - type: olm.bundle.object value: data: <base64_string> relatedImages: - image: example.com/example-inc/example-related-image@sha256:<digest> name: example-related-image schema: olm.bundle ---",
"opm validate <catalog_dir>",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3",
"podman login <registry>",
"podman push <registry>/<namespace>/<index_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.17 --tag mirror.example.com/abc/abc-redhat-operator-index:4.17.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc get packagemanifests -n openshift-marketplace",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.17",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.17 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.17 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.17] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.17 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.17",
"opm migrate <registry_image> <fbc_directory>",
"opm generate dockerfile <fbc_directory> --binary-image registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.17",
"opm index add --binary-image registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.17 --from-index <your_registry_image> --bundles \"\" -t \\<your_registry_image>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-catsrc namespace: my-ns spec: sourceType: grpc grpcPodConfig: securityContextConfig: legacy image: my-image:latest",
"apiVersion: v1 kind: Namespace metadata: labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" 1 openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: baseline 2 name: \"<namespace_name>\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 \"<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/<index_image_name>:<tag> 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"podman login <registry>:<port>",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"fegdsRib21iMQ==\" }, \"https://quay.io/my-namespace/my-user/my-image\": { \"auth\": \"eWfjwsDdfsa221==\" }, \"https://quay.io/my-namespace/my-user\": { \"auth\": \"feFweDdscw34rR==\" }, \"https://quay.io/my-namespace\": { \"auth\": \"frwEews4fescyq==\" } } }",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }",
"{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }",
"oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" grpcPodConfig: securityContextConfig: <security_mode> 2 image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m",
"oc extract secret/pull-secret -n openshift-config --confirm",
"cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson",
"oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"oc get sa -n <tenant_namespace> 1",
"NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1",
"oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch operatorhub cluster -p '{\"spec\": {\"disableAllDefaultSources\": true}}' --type=merge",
"grpcPodConfig: nodeSelector: custom_label: <label>",
"grpcPodConfig: priorityClassName: <priority_class>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: example-catalog namespace: openshift-marketplace annotations: operatorframework.io/priorityclass: system-cluster-critical",
"grpcPodConfig: tolerations: - key: \"<key_name>\" operator: \"<operator_type>\" value: \"<value>\" effect: \"<effect>\"",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc debug node/my-node",
"chroot /host",
"crictl ps",
"crictl ps --name network-operator",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"true",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"false",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'",
"oc get namespaces",
"operator-ns-1 Terminating",
"oc get crds",
"oc delete crd <crd_name>",
"oc get EtcdCluster -n <namespace_name>",
"oc get EtcdCluster --all-namespaces",
"oc delete <cr_name> <cr_instance_name> -n <namespace_name>",
"oc get namespace <namespace_name>",
"oc get sub,csv,installplan -n <namespace>",
"tar xvf operator-sdk-v1.36.1-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.36.1-ocp\",",
"tar xvf operator-sdk-v1.36.1-ocp-darwin-x86_64.tar.gz",
"tar xvf operator-sdk-v1.36.1-ocp-darwin-aarch64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.36.1-ocp\",",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"export GO111MODULE=on",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})",
"var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })",
"operator-sdk edit --multigroup=true",
"domain: example.com layout: go.kubebuilder.io/v3 multigroup: true",
"operator-sdk create api --group=cache --version=v1 --kind=Memcached",
"Create Resource [y/n] y Create Controller [y/n] y",
"Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go",
"// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }",
"import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }",
"// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil",
"import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil",
"// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }",
"import ( \"github.com/operator-framework/operator-lib/proxy\" )",
"for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) }",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp",
"containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17",
"docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder",
"k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3",
"go mod tidy",
"+ .PHONY: build-installer + build-installer: manifests generate kustomize ## Generate a consolidated YAML with CRDs and deployment. + mkdir -p dist + cd config/manager && USD(KUSTOMIZE) edit set image controller=USD{IMG} + USD(KUSTOMIZE) build config/default > dist/install.yaml",
"- ENVTEST_K8S_VERSION = 1.28.3 + ENVTEST_K8S_VERSION = 1.29.0",
"- GOLANGCI_LINT = USD(shell pwd)/bin/golangci-lint - GOLANGCI_LINT_VERSION ?= v1.54.2 - golangci-lint: - @[ -f USD(GOLANGCI_LINT) ] || { - set -e ; - curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b USD(shell dirname USD(GOLANGCI_LINT)) USD(GOLANGCI_LINT_VERSION) ; - }",
"- ## Tool Binaries - KUBECTL ?= kubectl - KUSTOMIZE ?= USD(LOCALBIN)/kustomize - CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen - ENVTEST ?= USD(LOCALBIN)/setup-envtest - - ## Tool Versions - KUSTOMIZE_VERSION ?= v5.2.1 - CONTROLLER_TOOLS_VERSION ?= v0.13.0 - - .PHONY: kustomize - kustomize: USD(KUSTOMIZE) ## Download kustomize locally if necessary. If wrong version is installed, it will be removed before downloading. - USD(KUSTOMIZE): USD(LOCALBIN) - @if test -x USD(LOCALBIN)/kustomize && ! USD(LOCALBIN)/kustomize version | grep -q USD(KUSTOMIZE_VERSION); then - echo \"USD(LOCALBIN)/kustomize version is not expected USD(KUSTOMIZE_VERSION). Removing it before installing.\"; - rm -rf USD(LOCALBIN)/kustomize; - fi - test -s USD(LOCALBIN)/kustomize || GOBIN=USD(LOCALBIN) GO111MODULE=on go install sigs.k8s.io/kustomize/kustomize/v5@USD(KUSTOMIZE_VERSION) - - .PHONY: controller-gen - controller-gen: USD(CONTROLLER_GEN) ## Download controller-gen locally if necessary. If wrong version is installed, it will be overwritten. - USD(CONTROLLER_GEN): USD(LOCALBIN) - test -s USD(LOCALBIN)/controller-gen && USD(LOCALBIN)/controller-gen --version | grep -q USD(CONTROLLER_TOOLS_VERSION) || - GOBIN=USD(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@USD(CONTROLLER_TOOLS_VERSION) - - .PHONY: envtest - envtest: USD(ENVTEST) ## Download envtest-setup locally if necessary. - USD(ENVTEST): USD(LOCALBIN) - test -s USD(LOCALBIN)/setup-envtest || GOBIN=USD(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest + ## Tool Binaries + KUBECTL ?= kubectl + KUSTOMIZE ?= USD(LOCALBIN)/kustomize-USD(KUSTOMIZE_VERSION) + CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen-USD(CONTROLLER_TOOLS_VERSION) + ENVTEST ?= USD(LOCALBIN)/setup-envtest-USD(ENVTEST_VERSION) + GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint-USD(GOLANGCI_LINT_VERSION) + + ## Tool Versions + KUSTOMIZE_VERSION ?= v5.3.0 + CONTROLLER_TOOLS_VERSION ?= v0.14.0 + ENVTEST_VERSION ?= release-0.17 + GOLANGCI_LINT_VERSION ?= v1.57.2 + + .PHONY: kustomize + kustomize: USD(KUSTOMIZE) ## Download kustomize locally if necessary. + USD(KUSTOMIZE): USD(LOCALBIN) + USD(call go-install-tool,USD(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v5,USD(KUSTOMIZE_VERSION)) + + .PHONY: controller-gen + controller-gen: USD(CONTROLLER_GEN) ## Download controller-gen locally if necessary. + USD(CONTROLLER_GEN): USD(LOCALBIN) + USD(call go-install-tool,USD(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen,USD(CONTROLLER_TOOLS_VERSION)) + + .PHONY: envtest + envtest: USD(ENVTEST) ## Download setup-envtest locally if necessary. + USD(ENVTEST): USD(LOCALBIN) + USD(call go-install-tool,USD(ENVTEST),sigs.k8s.io/controller-runtime/tools/setup-envtest,USD(ENVTEST_VERSION)) + + .PHONY: golangci-lint + golangci-lint: USD(GOLANGCI_LINT) ## Download golangci-lint locally if necessary. 
+ USD(GOLANGCI_LINT): USD(LOCALBIN) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + + # go-install-tool will 'go install' any package with custom target and name of binary, if it doesn't exist + # USD1 - target path with name of binary (ideally with version) + # USD2 - package url which can be installed + # USD3 - specific version of package + define go-install-tool + @[ -f USD(1) ] || { + set -e; + package=USD(2)@USD(3) ; + echo \"Downloading USDUSD{package}\" ; + GOBIN=USD(LOCALBIN) go install USDUSD{package} ; + mv \"USDUSD(echo \"USD(1)\" | sed \"s/-USD(3)USDUSD//\")\" USD(1) ; + } + endef",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211",
"--- defaults file for Memcached size: 1",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3",
"env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp",
"containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17",
"FROM registry.redhat.io/openshift4/ose-ansible-rhel9-operator:v4.17",
"docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder",
"k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3",
"go mod tidy",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false",
"- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False",
"apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"",
"{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }",
"--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"",
"sudo dnf install ansible",
"pip install kubernetes",
"ansible-galaxy collection install community.kubernetes",
"ansible-galaxy collection install -r requirements.yml",
"--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2",
"--- state: present",
"--- - hosts: localhost roles: - <kind>",
"ansible-playbook playbook.yml",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"NAME DATA AGE example-config 0 2m1s",
"ansible-playbook playbook.yml --extra-vars state=absent",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"make install",
"/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"make run",
"/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmaps",
"NAME STATUS AGE example-config Active 3s",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmap",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2",
"{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}",
"containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"",
"apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4",
"status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running",
"- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false",
"- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data",
"collections: - operator_sdk.util",
"k8s_status: status: key1: value1",
"mkdir nginx-operator",
"cd nginx-operator",
"operator-sdk init --plugins=helm",
"operator-sdk create api --group demo --version v1 --kind Nginx",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system",
"oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/nginx-operator",
"cd USDHOME/projects/nginx-operator",
"operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx",
"operator-sdk init --plugins helm --help",
"domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"",
"Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080",
"- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY",
"proxy: http: \"\" https: \"\" no_proxy: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project nginx-operator-system",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get nginx/nginx-sample -o yaml",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7",
"oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m",
"oc delete -f config/samples/demo_v1_nginx.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp",
"containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17",
"FROM registry.redhat.io/openshift4/ose-helm-rhel9-operator:v4.17",
"docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder",
"k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3",
"go mod tidy",
"- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.2.1/kustomize_v5.2.1_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | \\",
"apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2",
"{{ .Values.replicaCount }}",
"oc get Tomcats --all-namespaces",
"mkdir -p USDHOME/github.com/example/memcached-operator",
"cd USDHOME/github.com/example/memcached-operator",
"operator-sdk init --plugins=hybrid.helm.sdk.operatorframework.io --project-version=\"3\" --domain my.domain --repo=github.com/example/memcached-operator",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --group cache --version v1 --kind Memcached",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help",
"Use the 'create api' subcommand to add watches to this file. - group: cache.my.domain version: v1 kind: Memcached chart: helm-charts/memcached #+kubebuilder:scaffold:watch",
"// Operator's main.go // With the help of helpers provided in the library, the reconciler can be // configured here before starting the controller with this reconciler. reconciler := reconciler.New( reconciler.WithChart(*chart), reconciler.WithGroupVersionKind(gvk), ) if err := reconciler.SetupWithManager(mgr); err != nil { panic(fmt.Sprintf(\"unable to create reconciler: %s\", err)) }",
"operator-sdk create api --group=cache --version v1 --kind MemcachedBackup --resource --controller --plugins=go/v4",
"Create Resource [y/n] y Create Controller [y/n] y",
"// MemcachedBackupSpec defines the desired state of MemcachedBackup type MemcachedBackupSpec struct { // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster // Important: Run \"make\" to regenerate code after modifying this file //+kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedBackupStatus defines the observed state of MemcachedBackup type MemcachedBackupStatus struct { // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster // Important: Run \"make\" to regenerate code after modifying this file // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"for _, w := range ws { // Register controller with the factory reconcilePeriod := defaultReconcilePeriod if w.ReconcilePeriod != nil { reconcilePeriod = w.ReconcilePeriod.Duration } maxConcurrentReconciles := defaultMaxConcurrentReconciles if w.MaxConcurrentReconciles != nil { maxConcurrentReconciles = *w.MaxConcurrentReconciles } r, err := reconciler.New( reconciler.WithChart(*w.Chart), reconciler.WithGroupVersionKind(w.GroupVersionKind), reconciler.WithOverrideValues(w.OverrideValues), reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources), reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles), reconciler.WithReconcilePeriod(reconcilePeriod), reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...), reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...), reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...), )",
"// Setup manager with Go API if err = (&controllers.MemcachedBackupReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), }).SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"MemcachedBackup\") os.Exit(1) } // Setup manager with Helm API for _, w := range ws { if err := r.SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"Helm\") os.Exit(1) } setupLog.Info(\"configured watch\", \"gvk\", w.GroupVersionKind, \"chartPath\", w.ChartPath, \"maxConcurrentReconciles\", maxConcurrentReconciles, \"reconcilePeriod\", reconcilePeriod) } // Start the manager if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { setupLog.Error(err, \"problem running manager\") os.Exit(1) }",
"--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: manager-role rules: - apiGroups: - \"\" resources: - namespaces verbs: - get - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/finalizers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - \"\" resources: - pods - services - services/finalizers - endpoints - persistentvolumeclaims - events - configmaps - secrets - serviceaccounts verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/status verbs: - get - patch - update - apiGroups: - policy resources: - events - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcacheds - memcacheds/status - memcacheds/finalizers verbs: - create - delete - get - list - patch - update - watch",
"make install run",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc project <project_name>-system",
"apiVersion: cache.my.domain/v1 kind: Memcached metadata: name: memcached-sample spec: # Default values copied from <project_dir>/helm-charts/memcached/values.yaml affinity: {} autoscaling: enabled: false maxReplicas: 100 minReplicas: 1 targetCPUUtilizationPercentage: 80 fullnameOverride: \"\" image: pullPolicy: IfNotPresent repository: nginx tag: \"\" imagePullSecrets: [] ingress: annotations: {} className: \"\" enabled: false hosts: - host: chart-example.local paths: - path: / pathType: ImplementationSpecific tls: [] nameOverride: \"\" nodeSelector: {} podAnnotations: {} podSecurityContext: {} replicaCount: 3 resources: {} securityContext: {} service: port: 80 type: ClusterIP serviceAccount: annotations: {} create: true name: \"\" tolerations: []",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 18m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 18m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 18m",
"apiVersion: cache.my.domain/v1 kind: MemcachedBackup metadata: name: memcachedbackup-sample spec: size: 2",
"oc apply -f config/samples/cache_v1_memcachedbackup.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcachedbackup-sample-8649699989-4bbzg 1/1 Running 0 22m memcachedbackup-sample-8649699989-mq6mx 1/1 Running 0 22m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"oc delete -f config/samples/cache_v1_memcachedbackup.yaml",
"make undeploy",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp",
"containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17",
"docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder",
"k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3",
"go mod tidy",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator",
"operator-sdk create api --plugins quarkus --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator",
"domain: example.com layout: - quarkus.javaoperatorsdk.io/v1-alpha projectName: memcached-operator version: \"3\"",
"operator-sdk create api --plugins=quarkus \\ 1 --group=cache \\ 2 --version=v1 \\ 3 --kind=Memcached 4",
"tree",
". ├── Makefile ├── PROJECT ├── pom.xml └── src └── main ├── java │ └── com │ └── example │ ├── Memcached.java │ ├── MemcachedReconciler.java │ ├── MemcachedSpec.java │ └── MemcachedStatus.java └── resources └── application.properties 6 directories, 8 files",
"public class MemcachedSpec { private Integer size; public Integer getSize() { return size; } public void setSize(Integer size) { this.size = size; } }",
"import java.util.ArrayList; import java.util.List; public class MemcachedStatus { // Add Status information here // Nodes are the names of the memcached pods private List<String> nodes; public List<String> getNodes() { if (nodes == null) { nodes = new ArrayList<>(); } return nodes; } public void setNodes(List<String> nodes) { this.nodes = nodes; } }",
"@Version(\"v1\") @Group(\"cache.example.com\") public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}",
"mvn clean install",
"cat target/kubernetes/memcacheds.cache.example.com-v1.yaml",
"Generated by Fabric8 CRDGenerator, manual edits might get overwritten! apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: memcacheds.cache.example.com spec: group: cache.example.com names: kind: Memcached plural: memcacheds singular: memcached scope: Namespaced versions: - name: v1 schema: openAPIV3Schema: properties: spec: properties: size: type: integer type: object status: properties: nodes: items: type: string type: array type: object type: object served: true storage: true subresources: status: {}",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: # Add spec fields here size: 1",
"<dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.2.2</version> </dependency>",
"package com.example; import io.fabric8.kubernetes.client.KubernetesClient; import io.javaoperatorsdk.operator.api.reconciler.Context; import io.javaoperatorsdk.operator.api.reconciler.Reconciler; import io.javaoperatorsdk.operator.api.reconciler.UpdateControl; import io.fabric8.kubernetes.api.model.ContainerBuilder; import io.fabric8.kubernetes.api.model.ContainerPortBuilder; import io.fabric8.kubernetes.api.model.LabelSelectorBuilder; import io.fabric8.kubernetes.api.model.ObjectMetaBuilder; import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder; import io.fabric8.kubernetes.api.model.Pod; import io.fabric8.kubernetes.api.model.PodSpecBuilder; import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder; import io.fabric8.kubernetes.api.model.apps.Deployment; import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder; import org.apache.commons.collections.CollectionUtils; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.stream.Collectors; public class MemcachedReconciler implements Reconciler<Memcached> { private final KubernetesClient client; public MemcachedReconciler(KubernetesClient client) { this.client = client; } // TODO Fill in the rest of the reconciler @Override public UpdateControl<Memcached> reconcile( Memcached resource, Context context) { // TODO: fill in logic Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get(); if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); } int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize(); if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); } List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList()); if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); } return UpdateControl.noUpdate(); } private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; } private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", 
\"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; } }",
"Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get();",
"if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); }",
"int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize();",
"if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); }",
"List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());",
"if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); }",
"private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; }",
"private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; }",
"mvn clean install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 11.193 s [INFO] Finished at: 2021-05-26T12:16:54-04:00 [INFO] ------------------------------------------------------------------------",
"oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml",
"customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"",
"oc apply -f rbac.yaml",
"java -jar target/quarkus-app/quarkus-run.jar",
"kubectl apply -f memcached-sample.yaml",
"memcached.cache.example.com/memcached-sample created",
"oc get all",
"NAME READY STATUS RESTARTS AGE pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml",
"customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f rbac.yaml",
"oc get all -n default",
"NAME READY UP-TO-DATE AVAILABLE AGE pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm 0/1 Running 0 18s",
"oc apply -f memcached-sample.yaml",
"memcached.cache.example.com/memcached-sample created",
"oc get all",
"NAME READY STATUS RESTARTS AGE pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt 1/1 Running 1 79s pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.36.1-ocp",
"containers: - name: kube-rbac-proxy image: registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17",
"docker-buildx: ## Build and push the Docker image for the manager for multi-platform support - docker buildx create --name project-v3-builder docker buildx use project-v3-builder - docker buildx build --push --platform=USD(PLATFORMS) --tag USD{IMG} -f Dockerfile . - docker buildx rm project-v3-builder",
"k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 k8s.io/client-go v0.29.2 sigs.k8s.io/controller-runtime v0.17.3",
"go mod tidy",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: features.operators.openshift.io/disconnected: \"true\" features.operators.openshift.io/fips-compliant: \"false\" features.operators.openshift.io/proxy-aware: \"false\" features.operators.openshift.io/tls-profiles: \"false\" features.operators.openshift.io/token-auth-aws: \"false\" features.operators.openshift.io/token-auth-azure: \"false\" features.operators.openshift.io/token-auth-gcp: \"false\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'",
"spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2",
"// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{",
"spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211",
"- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2",
"relatedImage: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3",
"BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2",
"make bundle USE_IMAGE_DIGESTS=true",
"metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'",
"labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2",
"labels: operatorframework.io/os.linux: supported",
"labels: operatorframework.io/arch.amd64: supported",
"labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2",
"metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1",
"metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }",
"module github.com/example-inc/memcached-operator go 1.19 require ( k8s.io/apimachinery v0.26.0 k8s.io/client-go v0.26.0 sigs.k8s.io/controller-runtime v0.14.1 operator-framework/operator-lib v0.11.0 )",
"import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5",
"- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.",
"required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.",
"versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true",
"customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster",
"versions: - name: v1alpha1 served: false 1 storage: true",
"versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2",
"versions: - name: v1beta1 served: true storage: true",
"metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>",
"IMAGE_TAG_BASE=quay.io/example/my-operator",
"make bundle-build bundle-push catalog-build catalog-push",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m",
"oc get catalogsource",
"NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1",
"oc get og",
"NAME AGE my-test 4h40m",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded",
"oc get pods",
"NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m",
"operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1",
"INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"",
"operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2",
"INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"",
"operator-sdk cleanup memcached-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1",
"com.redhat.openshift.versions: \"v4.7-v4.9\" 1",
"LABEL com.redhat.openshift.versions=\"<versions>\" 1",
"spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"",
"install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default",
"spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"",
"// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"",
"import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }",
"// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }",
"// apply CredentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }",
"sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }",
"#!/bin/bash set -x AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") NAMESPACE=my-namespace SERVICE_ACCOUNT_NAME=\"my-service-account\" POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME}\" } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDSERVICE_ACCOUNT_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDSERVICE_ACCOUNT_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"oc exec operator-pod -n <namespace_name> -- cat /var/run/secrets/openshift/serviceaccount/token",
"oc exec operator-pod -n <namespace_name> -- cat /<path>/<to>/<secret_name> 1",
"aws sts assume-role-with-web-identity --role-arn USDROLEARN --role-session-name <session_name> --web-identity-token USDTOKEN",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-azure: \"true\"",
"// Get ENV var clientID := os.Getenv(\"CLIENTID\") tenantID := os.Getenv(\"TENANTID\") subscriptionID := os.Getenv(\"SUBSCRIPTIONID\") azureFederatedTokenFile := \"/var/run/secrets/openshift/serviceaccount/token\"",
"// apply CredentialsRequest on install credReqTemplate.Spec.AzureProviderSpec.AzureClientID = clientID credReqTemplate.Spec.AzureProviderSpec.AzureTenantID = tenantID credReqTemplate.Spec.AzureProviderSpec.AzureRegion = \"centralus\" credReqTemplate.Spec.AzureProviderSpec.AzureSubscriptionID = subscriptionID credReqTemplate.CloudTokenPath = azureFederatedTokenFile c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>",
"<service_account_name>@<project_id>.iam.gserviceaccount.com",
"volumeMounts: - name: bound-sa-token mountPath: /var/run/secrets/openshift/serviceaccount readOnly: true volumes: # This service account token can be used to provide identity outside the cluster. - name: bound-sa-token projected: sources: - serviceAccountToken: path: token audience: openshift",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-gcp: \"true\"",
"// Get ENV var audience := os.Getenv(\"AUDIENCE\") serviceAccountEmail := os.Getenv(\"SERVICE_ACCOUNT_EMAIL\") gcpIdentityTokenFile := \"/var/run/secrets/openshift/serviceaccount/token\"",
"// apply CredentialsRequest on install credReqTemplate.Spec.GCPProviderSpec.Audience = audience credReqTemplate.Spec.GCPProviderSpec.ServiceAccountEmail = serviceAccountEmail credReqTemplate.CloudTokenPath = gcpIdentityTokenFile c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"service_account_json := secret.StringData[\"service_account.json\"]",
"operator-sdk scorecard <bundle_dir_or_image> [flags]",
"operator-sdk scorecard -h",
"./bundle └── tests └── scorecard └── config.yaml",
"kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.36.1 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.36.1 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test",
"make bundle",
"operator-sdk scorecard <bundle_dir_or_image>",
"{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.36.1\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }",
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.36.1 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'",
"apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.36.1 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.36.1 labels: suite: olm test: olm-bundle-validation-test",
"// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }",
"operator-sdk bundle validate <bundle_dir_or_image> <flags>",
"./bundle ├── manifests │ ├── cache.my.domain_memcacheds.yaml │ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml",
"INFO[0000] All validation tests have completed successfully",
"ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV",
"WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully",
"operator-sdk bundle validate -h",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"operator-sdk bundle validate ./bundle",
"operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description",
"operator-sdk bundle validate ./bundle --select-optional name=multiarch",
"INFO[0020] All validation tests have completed successfully",
"ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.ppc64le) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.s390x) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.amd64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.arm64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]",
"WARN[0014] Warning: Value test-operator.v0.0.1: check if the CSV is missing the label (operatorframework.io/arch.<value>) for the Arch(s): [\"amd64\" \"arm64\" \"ppc64le\" \"s390x\"]. Be aware that your Operator manager image [\"quay.io/example-org/test-operator:v1alpha1\"] provides this support. Thus, it is very likely that you want to provide it and if you support more than amd64 architectures, you MUST,use the required labels for all which are supported.Otherwise, your solution cannot be listed on the cluster for these architectures",
"// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)",
"operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)",
"../prometheus",
"package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }",
"func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: memcached-operator-system rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"oc apply -f config/prometheus/role.yaml",
"oc apply -f config/prometheus/rolebinding.yaml",
"oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"",
"operator-sdk init --plugins=ansible --domain=testmetrics.com",
"operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role",
"--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1",
"oc create -f config/samples/metrics_v1_testmetrics.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m",
"oc get ep",
"NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m",
"token=`oc create token prometheus-k8s -n openshift-monitoring`",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter",
"HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge",
"HELP my_gauge_metric Create my gauge and set it to 2.",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe",
"HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary",
"import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }",
"import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }",
"docker manifest inspect <image_manifest> 1",
"{ \"manifests\": [ { \"digest\": \"sha256:c0669ef34cdc14332c0f1ab0c2c01acb91d96014b172f1a76f3a39e63d1f0bda\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\" }, \"size\": 528 }, { \"digest\": \"sha256:30e6d35703c578ee703230b9dc87ada2ba958c1928615ac8a674fcbbcbb0f281\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm64\", \"os\": \"linux\", \"variant\": \"v8\" }, \"size\": 528 }, ] }",
"docker inspect <image>",
"FROM golang:1.19 as builder ARG TARGETOS ARG TARGETARCH RUN CGO_ENABLED=0 GOOS=USD{TARGETOS:-linux} GOARCH=USD{TARGETARCH} go build -a -o manager main.go 1",
"PLATFORMS ?= linux/arm64,linux/amd64 1 .PHONY: docker-buildx",
"make docker-buildx IMG=<image_registry>/<organization_name>/<repository_name>:<version_or_sha>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: 2 - matchExpressions: 3 - key: kubernetes.io/arch 4 operator: In values: - amd64 - arm64 - ppc64le - s390x - key: kubernetes.io/os 5 operator: In values: - linux",
"Template: corev1.PodTemplateSpec{ Spec: corev1.PodSpec{ Affinity: &corev1.Affinity{ NodeAffinity: &corev1.NodeAffinity{ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ NodeSelectorTerms: []corev1.NodeSelectorTerm{ { MatchExpressions: []corev1.NodeSelectorRequirement{ { Key: \"kubernetes.io/arch\", Operator: \"In\", Values: []string{\"amd64\",\"arm64\",\"ppc64le\",\"s390x\"}, }, { Key: \"kubernetes.io/os\", Operator: \"In\", Values: []string{\"linux\"}, }, }, }, }, }, }, }, SecurityContext: &corev1.PodSecurityContext{ }, Containers: []corev1.Container{{ }}, },",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 1 - preference: matchExpressions: 2 - key: kubernetes.io/arch 3 operator: In 4 values: - amd64 - arm64 weight: 90 5",
"cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }",
"err := cfg.Execute(ctx)",
"packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml",
"bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml",
"operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3",
"operator-sdk run bundle <bundle_image_name>:<tag>",
"INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh",
"oc get clusteroperator authentication -o yaml",
"oc -n openshift-monitoring edit cm cluster-monitoring-config",
"oc edit etcd cluster",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc edit kubeapiserver",
"oc get clusteroperator openshift-controller-manager -o yaml",
"oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/operators/index
|
6.3. Performing a Driver Update During Installation
|
6.3. Performing a Driver Update During Installation At the very beginning of the installation process, you can perform a driver update in the following ways: let the installation program automatically find and offer a driver update for installation, let the installation program prompt you to locate a driver update, manually specify a path to a driver update image or an RPM package. Important Always make sure to put your driver update discs on a standard disk partition. Advanced storage, such as RAID or LVM volumes, might not be accessible during the early stage of the installation when you perform driver updates. 6.3.1. Automatic Driver Update To have the installation program automatically recognize a driver update disc, connect a block device with the OEMDRV volume label to your computer before starting the installation process. Note Starting with Red Hat Enterprise Linux 7.2, you can also use the OEMDRV block device to automatically load a Kickstart file. This file must be named ks.cfg and placed in the root of the device to be loaded. See Chapter 27, Kickstart Installations for more information about Kickstart installations. When the installation begins, the installation program detects all available storage connected to the system. If it finds a storage device labeled OEMDRV , it will treat it as a driver update disc and attempt to load driver updates from this device. You will be prompted to select which drivers to load: Figure 6.1. Selecting a Driver Use number keys to toggle selection on individual drivers. When ready, press c to install the selected drivers and proceed to the Anaconda graphical user interface. 6.3.2. Assisted Driver Update It is always recommended to have a block device with the OEMDRV volume label available to install a driver during installation. However, if no such device is detected and the inst.dd option was specified at the boot command line, the installation program lets you find the driver disk in interactive mode. In the first step, select a local disk partition from the list for Anaconda to scan for ISO files. Then, select one of the detected ISO files. Finally, select one or more available drivers. The image below demonstrates the process in the text user interface with individual steps highlighted. Figure 6.2. Selecting a Driver Interactively Note If you extracted your ISO image file and burned it on a CD or DVD but the media does not have the OEMDRV volume label, either use the inst.dd option with no arguments and use the menu to select the device, or use the following boot option for the installation program to scan the media for drivers: Hit number keys to toggle selection on individual drivers. When ready, press c to install the selected drivers and proceed to the Anaconda graphical user interface. 6.3.3. Manual Driver Update For manual driver installation, prepare an ISO image file containing your drivers to an accessible location, such a USB flash drive or a web server, and connect it to your computer. At the welcome screen, hit Tab to display the boot command line and append the inst.dd= location to it, where location is a path to the driver update disc: Figure 6.3. Specifying a Path to a Driver Update Typically, the image file is located on a web server (for example, http://server.example.com/dd.iso ) or on a USB flash drive (for example, /dev/sdb1 ). It is also possible to specify an RPM package containing the driver update (for example http://server.example.com/dd.rpm ). When ready, hit Enter to execute the boot command. 
Then, your selected drivers will be loaded and the installation process will proceed normally. 6.3.4. Blacklisting a Driver A malfunctioning driver can prevent a system from booting normally during installation. When this happens, you can disable (or blacklist) the driver by customizing the boot command line. At the boot menu, display the boot command line by hitting the Tab key. Then, append the modprobe.blacklist= driver_name option to it. Replace driver_name with the name of the driver or drivers you want to disable, for example: Note that the drivers blacklisted during installation using the modprobe.blacklist= boot option will remain disabled on the installed system and appear in the /etc/modprobe.d/anaconda-blacklist.conf file. See Chapter 23, Boot Options for more information about blacklisting drivers and other boot options.
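As a brief illustration of how these options combine on a single boot command line, the following sketch loads a driver update from the web server location shown earlier while disabling two drivers; the driver names nouveau and radeon are arbitrary examples, not a recommendation from this guide:
inst.dd=http://server.example.com/dd.iso modprobe.blacklist=nouveau,radeon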
|
[
"inst.dd=/dev/sr0",
"modprobe.blacklist=ahci"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-driver-updates-performing-x86
|
Chapter 38. Authentication and Interoperability
|
Chapter 38. Authentication and Interoperability Use of AD and LDAP sudo providers The Active Directory (AD) provider is a back end used to connect to an AD server. Starting with Red Hat Enterprise Linux 7.2, using the AD sudo provider together with the LDAP provider is available as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the [domain] section of the sssd.conf file. (BZ# 1068725 ) DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2: http://tools.ietf.org/html/rfc6781#section-2 Secure Domain Name System (DNS) Deployment Guide: http://dx.doi.org/10.6028/NIST.SP.800-81-2 DNSSEC Key Rollover Timing Considerations: http://tools.ietf.org/html/rfc7583 Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices described in the Red Hat Enterprise Linux Networking Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/ch-Configure_Host_Names.html#sec-Recommended_Naming_Practices . (BZ# 1115294 ) Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview. In Red Hat Enterprise Linux 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use earlier or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless of whether one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see https://access.redhat.com/articles/2728021 (BZ# 1298286 ) The Custodia secrets service provider is now available As a Technology Preview, you can now use Custodia, a secrets service provider. Custodia stores or serves as a proxy for secrets, such as keys or passwords. For details, see the upstream documentation at http://custodia.readthedocs.io . (BZ# 1403214 ) Containerized Identity Management server available as Technology Preview The rhel7/ipa-server container image is available as a Technology Preview feature. Note that the rhel7/sssd container image is now fully supported. For details, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/using_containerized_identity_management_services . (BZ# 1405325 , BZ#1405326)
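To make the sudo_provider=ad setting from the first note above easier to picture, the following is a minimal sketch of the relevant /etc/sssd/sssd.conf fragment. The domain name ad.example.com is a placeholder and the id_provider line is shown only for context; adapt both to your environment:
[domain/ad.example.com]
id_provider = ad
sudo_provider = ad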
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/technology_previews_authentication_and_interoperability
|
Chapter 4. Cluster access with kubeconfig
|
Chapter 4. Cluster access with kubeconfig Learn about how kubeconfig files are used with MicroShift deployments. CLI tools use kubeconfig files to communicate with the API server of a cluster. These files provide cluster details, IP addresses, and other information needed for authentication. 4.1. Kubeconfig files for configuring cluster access The two categories of kubeconfig files used in MicroShift are local access and remote access. Every time MicroShift starts, a set of kubeconfig files for local and remote access to the API server is generated. These files are generated in the /var/lib/microshift/resources/kubeadmin/ directory using preexisting configuration information. Each access type requires a different authentication certificate signed by different Certificate Authorities (CAs). The generation of multiple kubeconfig files accommodates this need. You can use the appropriate kubeconfig file for the access type needed in each case to provide authentication details. The contents of MicroShift kubeconfig files are determined by either default built-in values or a config.yaml file. Note A kubeconfig file must exist for the cluster to be accessible. The values are applied from built-in default values or a config.yaml , if one was created. Example contents of the kubeconfig files /var/lib/microshift/resources/kubeadmin/ ├── kubeconfig 1 ├── alt-name-1 2 │ └── kubeconfig ├── 1.2.3.4 3 │ └── kubeconfig └── microshift-rhel9 4 └── kubeconfig 1 Local host name. The main IP address of the host is always the default. 2 Subject Alternative Names for API server certificates. 3 DNS name. 4 MicroShift host name. 4.2. Local access kubeconfig file The local access kubeconfig file is written to /var/lib/microshift/resources/kubeadmin/kubeconfig . This kubeconfig file provides access to the API server using localhost . Choose this file when you are connecting to the cluster locally. Example contents of kubeconfig for local access clusters: - cluster: certificate-authority-data: <base64 CA> server: https://localhost:6443 The localhost kubeconfig file can only be used from a client connecting to the API server from the same host. The certificates in the file do not work for remote connections. 4.2.1. Accessing the MicroShift cluster locally Use the following procedure to access the MicroShift cluster locally by using a kubeconfig file. Prerequisites You have installed the oc binary. Procedure Optional: If your Red Hat Enterprise Linux (RHEL) machine does not have a ~/.kube/ folder, create one by running the following command: $ mkdir -p ~/.kube/ Copy the generated local access kubeconfig file to the ~/.kube/ directory by running the following command: $ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config Update the permissions on your ~/.kube/config file by running the following command: $ chmod go-r ~/.kube/config Verification Verify that MicroShift is running by entering the following command: $ oc get all -A 4.3. Remote access kubeconfig files When a MicroShift cluster connects to the API server from an external source, a certificate with all of the alternative names in the SAN field is used for validation. MicroShift generates a default kubeconfig for external access using the hostname value. The defaults are set in the <node.hostnameOverride> , <node.nodeIP> and api.<dns.baseDomain> parameter values of the default kubeconfig file.
The /var/lib/microshift/resources/kubeadmin/<hostname>/kubeconfig file uses the hostname of the machine, or node.hostnameOverride if that option is set, to reach the API server. The CA of the kubeconfig file is able to validate certificates when accessed externally. Example contents of a default kubeconfig file for remote access clusters: - cluster: certificate-authority-data: <base64 CA> server: https://microshift-rhel9:6443 4.3.1. Remote access customization Multiple remote access kubeconfig files can be generated for accessing the cluster with different IP addresses or host names. An additional kubeconfig file is generated for each entry in the apiServer.subjectAltNames parameter. You can copy remote access kubeconfig files from the host while you have IP connectivity to it, and then use them to access the API server from other workstations. 4.4. Generating additional kubeconfig files for remote access You can generate additional kubeconfig files to use if you need more host names or IP addresses than the default remote access file provides. Important You must restart MicroShift for configuration changes to be implemented. Prerequisites You have created a config.yaml for MicroShift. Procedure Optional: You can show the contents of the config.yaml . Run the following command: $ cat /etc/microshift/config.yaml Optional: You can show the contents of the remote-access kubeconfig file. Run the following command: $ cat /var/lib/microshift/resources/kubeadmin/<hostname>/kubeconfig Important Additional remote access kubeconfig files must include one of the server names listed in the Red Hat build of MicroShift config.yaml file. Additional kubeconfig files must also use the same CA for validation. To generate additional kubeconfig files for additional DNS name SANs or external IP addresses, add the entries you need to the apiServer.subjectAltNames field. In the following example, the DNS name used is alt-name-1 and the IP address is 1.2.3.4 . Example config.yaml with additional authentication values dns: baseDomain: example.com node: hostnameOverride: "microshift-rhel9" 1 nodeIP: 10.0.0.1 apiServer: subjectAltNames: - alt-name-1 2 - 1.2.3.4 3 1 Hostname 2 DNS name 3 IP address or range Restart MicroShift to apply configuration changes and auto-generate the kubeconfig files you need by running the following command: $ sudo systemctl restart microshift To check the contents of additional remote-access kubeconfig files, insert the name or IP address as listed in the config.yaml into the cat command. For example, alt-name-1 is used in the following example command: $ cat /var/lib/microshift/resources/kubeadmin/alt-name-1/kubeconfig Choose the kubeconfig file to use that contains the SAN or IP address you want to use to connect to your cluster. In this example, the kubeconfig containing alt-name-1 in the cluster.server field is the correct file. Example contents of an additional kubeconfig file clusters: - cluster: certificate-authority-data: <base64 CA> server: https://alt-name-1:6443 1 1 The /var/lib/microshift/resources/kubeadmin/alt-name-1/kubeconfig file values are from the apiServer.subjectAltNames configuration values. Note All of these parameters are included as common names (CN) and subject alternative names (SAN) in the external serving certificates for the API server. 4.4.1. Opening the firewall for remote access to the MicroShift cluster Use the following procedure to open the firewall so that a remote user can access the MicroShift cluster.
This procedure must be completed before a workstation user can access the cluster remotely. For this procedure, user@microshift is the user on the MicroShift host machine and is responsible for setting up that machine so that it can be accessed by a remote user on a separate workstation. Prerequisites You have installed the oc binary. Your account has cluster administration privileges. Procedure As user@microshift on the MicroShift host, open the firewall port for the Kubernetes API server ( 6443/tcp ) by running the following command: [user@microshift]$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp && sudo firewall-cmd --reload Verification As user@microshift , verify that MicroShift is running by entering the following command: [user@microshift]$ oc get all -A 4.4.2. Accessing the MicroShift cluster remotely Use the following procedure to access the MicroShift cluster from a remote location by using a kubeconfig file. The user@workstation login is used to access the host machine remotely. The <user> value in the procedure is the name of the user that user@workstation uses to log in to the MicroShift host. Prerequisites You have installed the oc binary. The user@microshift user has opened the firewall on the MicroShift host. Procedure As user@workstation , create a ~/.kube/ folder if your Red Hat Enterprise Linux (RHEL) machine does not have one by running the following command: [user@workstation]$ mkdir -p ~/.kube/ As user@workstation , set a variable for the hostname of your MicroShift host by running the following command: [user@workstation]$ MICROSHIFT_MACHINE=<name or IP address of MicroShift machine> As user@workstation , copy the generated kubeconfig file that contains the host name or IP address you want to connect with from the RHEL machine running MicroShift to your local machine by running the following command: [user@workstation]$ ssh <user>@$MICROSHIFT_MACHINE "sudo cat /var/lib/microshift/resources/kubeadmin/$MICROSHIFT_MACHINE/kubeconfig" > ~/.kube/config Note To generate the kubeconfig files for this step, see Generating additional kubeconfig files for remote access . As user@workstation , update the permissions on your ~/.kube/config file by running the following command: $ chmod go-r ~/.kube/config Verification As user@workstation , verify that MicroShift is running by entering the following command: [user@workstation]$ oc get all -A
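If you would rather leave the default ~/.kube/config on the workstation untouched, you can save the copied kubeconfig under another name and point oc at it explicitly. This is only a sketch; the ~/.kube/microshift-config path is a hypothetical location, not a file that MicroShift creates:
[user@workstation]$ oc --kubeconfig ~/.kube/microshift-config get pods -A
Alternatively, export the KUBECONFIG environment variable with the same path so that subsequent oc commands use it automatically.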
|
[
"/var/lib/microshift/resources/kubeadmin/ ├── kubeconfig 1 ├── alt-name-1 2 │ └── kubeconfig ├── 1.2.3.4 3 │ └── kubeconfig └── microshift-rhel9 4 └── kubeconfig",
"clusters: - cluster: certificate-authority-data: <base64 CA> server: https://localhost:6443",
"mkdir -p ~/.kube/",
"sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config",
"chmod go-r ~/.kube/config",
"oc get all -A",
"clusters: - cluster: certificate-authority-data: <base64 CA> server: https://microshift-rhel9:6443",
"cat /etc/microshift/config.yaml",
"cat /var/lib/microshift/resources/kubeadmin/<hostname>/kubeconfig",
"dns: baseDomain: example.com node: hostnameOverride: \"microshift-rhel9\" 1 nodeIP: 10.0.0.1 apiServer: subjectAltNames: - alt-name-1 2 - 1.2.3.4 3",
"sudo systemctl restart microshift",
"cat /var/lib/microshift/resources/kubeadmin/alt-name-1/kubeconfig",
"clusters: - cluster: certificate-authority-data: <base64 CA> server: https://alt-name-1:6443 1",
"[user@microshift]USD sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp && sudo firewall-cmd --reload",
"[user@microshift]USD oc get all -A",
"[user@workstation]USD mkdir -p ~/.kube/",
"[user@workstation]USD MICROSHIFT_MACHINE=<name or IP address of MicroShift machine>",
"[user@workstation]USD ssh <user>@USDMICROSHIFT_MACHINE \"sudo cat /var/lib/microshift/resources/kubeadmin/USDMICROSHIFT_MACHINE/kubeconfig\" > ~/.kube/config",
"chmod go-r ~/.kube/config",
"[user@workstation]USD oc get all -A"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/configuring/microshift-kubeconfig
|
Distributed tracing
|
Distributed tracing OpenShift Container Platform 4.15 Configuring and using distributed tracing in OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: \"true\" name: openshift-tempo-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-tempo-operator",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc apply -f - << EOF <tempostack_cr> EOF",
"oc get tempostacks.tempo.grafana.com simplest -o yaml",
"oc get pods",
"oc get route",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc apply -f - << EOF <tempomonolithic_cr> EOF",
"oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml",
"oc get pods",
"oc get route",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}\" 2 \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend\" ] } } } ] }",
"aws iam create-role --role-name \"tempo-s3-access\" --assume-role-policy-document \"file:///tmp/trust.json\" --query Role.Arn --output text",
"aws iam attach-role-policy --role-name \"tempo-s3-access\" --policy-arn \"arn:aws:iam::aws:policy/AmazonS3FullAccess\"",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque",
"ibmcloud resource service-key-create <tempo_bucket> Writer --instance-name <tempo_bucket> --parameters '{\"HMAC\":true}'",
"oc -n <namespace> create secret generic <ibm_cos_secret> --from-literal=bucket=\"<tempo_bucket>\" --from-literal=endpoint=\"<ibm_bucket_endpoint>\" --from-literal=access_key_id=\"<ibm_bucket_access_key>\" --from-literal=access_key_secret=\"<ibm_bucket_secret_key>\"",
"apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: storage: secret: name: <ibm_cos_secret> 1 type: s3",
"apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: \"tempo-simplest-distributor:4317\" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus]",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: \"\" 3 ingress: type: route",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name=\"frontend\", span_kind=\"SPAN_KIND_SERVER\"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: \"High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}\" description: \"{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)\"",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfa\" 4 - tenantName: prod tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfb\" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true",
"oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"oc login --username=<your_username>",
"oc get deployments -n <project_of_tempostack_instance>",
"oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>",
"oc get deployments -n <project_of_tempostack_instance>",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"oc create -n tracing-system -f jaeger.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *'",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-production.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 1 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-streaming.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: \"sidecar.jaegertracing.io/inject\": \"true\" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion",
"apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc",
"oc login --username=<your_username>",
"oc login --username=<NAMEOFUSER>",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m",
"oc delete jaeger <deployment-name> -n <jaeger-project>",
"oc delete jaeger tracing2 -n openshift-operators",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/distributed_tracing/index
|
25.3. Basic Configuration of Rsyslog
|
25.3. Basic Configuration of Rsyslog The main configuration file for rsyslog is /etc/rsyslog.conf . Here, you can specify global directives , modules , and rules that consist of filter and action parts. Also, you can add comments in the form of text following a hash sign ( # ). 25.3.1. Filters A rule is specified by a filter part, which selects a subset of syslog messages, and an action part, which specifies what to do with the selected messages. To define a rule in your /etc/rsyslog.conf configuration file, define both, a filter and an action, on one line and separate them with one or more spaces or tabs. rsyslog offers various ways to filter syslog messages according to selected properties. The available filtering methods can be divided into Facility/Priority-based , Property-based , and Expression-based filters. Facility/Priority-based filters The most used and well-known way to filter syslog messages is to use the facility/priority-based filters which filter syslog messages based on two conditions: facility and priority separated by a dot. To create a selector, use the following syntax: where: FACILITY specifies the subsystem that produces a specific syslog message. For example, the mail subsystem handles all mail-related syslog messages. FACILITY can be represented by one of the following keywords (or by a numerical code): kern (0), user (1), mail (2), daemon (3), auth (4), syslog (5), lpr (6), news (7), uucp (8), cron (9), authpriv (10), ftp (11), and local0 through local7 (16 - 23). PRIORITY specifies a priority of a syslog message. PRIORITY can be represented by one of the following keywords (or by a number): debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0). The aforementioned syntax selects syslog messages with the defined or higher priority. By preceding any priority keyword with an equal sign ( = ), you specify that only syslog messages with the specified priority will be selected. All other priorities will be ignored. Conversely, preceding a priority keyword with an exclamation mark ( ! ) selects all syslog messages except those with the defined priority. In addition to the keywords specified above, you may also use an asterisk ( * ) to define all facilities or priorities (depending on where you place the asterisk, before or after the comma). Specifying the priority keyword none serves for facilities with no given priorities. Both facility and priority conditions are case-insensitive. To define multiple facilities and priorities, separate them with a comma ( , ). To define multiple selectors on one line, separate them with a semi-colon ( ; ). Note that each selector in the selector field is capable of overwriting the preceding ones, which can exclude some priorities from the pattern. Example 25.1. Facility/Priority-based Filters The following are a few examples of simple facility/priority-based filters that can be specified in /etc/rsyslog.conf . To select all kernel syslog messages with any priority, add the following text into the configuration file: To select all mail syslog messages with priority crit and higher, use this form: To select all cron syslog messages except those with the info or debug priority, set the configuration in the following form: Property-based filters Property-based filters let you filter syslog messages by any property, such as timegenerated or syslogtag . For more information on properties, see the section called "Properties" . 
You can compare each of the specified properties to a particular value using one of the compare-operations listed in Table 25.1, "Property-based compare-operations" . Both property names and compare operations are case-sensitive. Property-based filter must start with a colon ( : ). To define the filter, use the following syntax: where: The PROPERTY attribute specifies the desired property. The optional exclamation point ( ! ) negates the output of the compare-operation. Other Boolean operators are currently not supported in property-based filters. The COMPARE_OPERATION attribute specifies one of the compare-operations listed in Table 25.1, "Property-based compare-operations" . The STRING attribute specifies the value that the text provided by the property is compared to. This value must be enclosed in quotation marks. To escape certain character inside the string (for example a quotation mark ( " )), use the backslash character ( \ ). Table 25.1. Property-based compare-operations Compare-operation Description contains Checks whether the provided string matches any part of the text provided by the property. To perform case-insensitive comparisons, use contains_i . isequal Compares the provided string against all of the text provided by the property. These two values must be exactly equal to match. startswith Checks whether the provided string is found exactly at the beginning of the text provided by the property. To perform case-insensitive comparisons, use startswith_i . regex Compares the provided POSIX BRE (Basic Regular Expression) against the text provided by the property. ereregex Compares the provided POSIX ERE (Extended Regular Expression) regular expression against the text provided by the property. isempty Checks if the property is empty. The value is discarded. This is especially useful when working with normalized data, where some fields may be populated based on normalization result. Example 25.2. Property-based Filters The following are a few examples of property-based filters that can be specified in /etc/rsyslog.conf . To select syslog messages which contain the string error in their message text, use: The following filter selects syslog messages received from the host name host1 : To select syslog messages which do not contain any mention of the words fatal and error with any or no text between them (for example, fatal lib error ), type: Expression-based filters Expression-based filters select syslog messages according to defined arithmetic, Boolean or string operations. Expression-based filters use rsyslog 's own scripting language called RainerScript to build complex filters. The basic syntax of expression-based filter looks as follows: where: The EXPRESSION attribute represents an expression to be evaluated, for example: USDmsg startswith 'DEVNAME' or USDsyslogfacility-text == 'local0' . You can specify more than one expression in a single filter by using and and or operators. The ACTION attribute represents an action to be performed if the expression returns the value true . This can be a single action, or an arbitrary complex script enclosed in curly braces. Expression-based filters are indicated by the keyword if at the start of a new line. The then keyword separates the EXPRESSION from the ACTION . Optionally, you can employ the else keyword to specify what action is to be performed in case the condition is not met. With expression-based filters, you can nest the conditions by using a script enclosed in curly braces as in Example 25.3, "Expression-based Filters" . 
The script allows you to use facility/priority-based filters inside the expression. On the other hand, property-based filters are not recommended here. RainerScript supports regular expressions with specialized functions re_match() and re_extract() . Example 25.3. Expression-based Filters The following expression contains two nested conditions. The log files created by a program called prog1 are split into two files based on the presence of the "test" string in the message. See the section called "Online Documentation" for more examples of various expression-based filters. RainerScript is the basis for rsyslog's new configuration format; see Section 25.4, "Using the New Configuration Format".
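To connect the filter syntax above with the definition of a rule as a filter plus an action on a single line, here is a minimal sketch of two complete rules; the log file paths are illustrative only:
kern.warning /var/log/kern-warnings.log
:msg, contains, "error" /var/log/custom-errors.log
The first rule writes all kernel messages with priority warning or higher to one file, and the second writes any message whose text contains the string error to another.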
|
[
"FACILITY . PRIORITY",
"kern.*",
"mail.crit",
"cron.!info,!debug",
": PROPERTY , [!] COMPARE_OPERATION , \" STRING \"",
":msg, contains, \"error\"",
":hostname, isequal, \"host1\"",
":msg, !regex, \"fatal .* error\"",
"if EXPRESSION then ACTION else ACTION",
"if USDprogramname == 'prog1' then { action(type=\"omfile\" file=\"/var/log/prog1.log\") if USDmsg contains 'test' then action(type=\"omfile\" file=\"/var/log/prog1test.log\") else action(type=\"omfile\" file=\"/var/log/prog1notest.log\") }"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-basic_configuration_of_rsyslog
|
Providing feedback on Red Hat build of OpenJDK documentation
|
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/troubleshooting_red_hat_build_of_openjdk_8_for_windows/providing-direct-documentation-feedback_openjdk
|
Preface
|
Preface
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/preface
|
Chapter 14. Configuring interface-level network sysctls
|
Chapter 14. Configuring interface-level network sysctls In Linux, sysctl allows an administrator to modify kernel parameters at runtime. You can modify interface-level network sysctls using the tuning Container Network Interface (CNI) meta plugin. The tuning CNI meta plugin operates in a chain with a main CNI plugin as illustrated. The main CNI plugin assigns the interface and passes this to the tuning CNI meta plugin at runtime. You can change some sysctls and several interface attributes (promiscuous mode, all-multicast mode, MTU, and MAC address) in the network namespace by using the tuning CNI meta plugin. In the tuning CNI meta plugin configuration, the interface name is represented by the IFNAME token, and is replaced with the actual name of the interface at runtime. Note In OpenShift Container Platform, the tuning CNI meta plugin only supports changing interface-level network sysctls. 14.1. Configuring the tuning CNI The following procedure configures the tuning CNI to change the interface-level network net.ipv4.conf.IFNAME.accept_redirects sysctl. This example enables accepting and sending ICMP-redirected packets. Procedure Create a network attachment definition, such as tuning-example.yaml , with the following content: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ "cniVersion": "0.4.0", 3 "name": "<name>", 4 "plugins": [{ "type": "<main_CNI_plugin>" 5 }, { "type": "tuning", 6 "sysctl": { "net.ipv4.conf.IFNAME.accept_redirects": "1" 7 } } ] }' 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace. 2 Specifies the namespace that the object is associated with. 3 Specifies the CNI specification version. 4 Specifies the name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 5 Specifies the name of the main CNI plugin to configure. 6 Specifies the name of the CNI meta plugin. 7 Specifies the sysctl to set. An example YAML file is shown here: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ "cniVersion": "0.4.0", "name": "tuningnad", "plugins": [{ "type": "bridge" }, { "type": "tuning", "sysctl": { "net.ipv4.conf.IFNAME.accept_redirects": "1" } } ] }' Apply the YAML by running the following command: USD oc apply -f tuning-example.yaml Example output networkattachmentdefinition.k8s.cni.cncf.io/tuningnad created Create a pod such as examplepod.yaml with the network attachment definition similar to the following: apiVersion: v1 kind: Pod metadata: name: tunepod namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: ["ALL"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault 1 Specify the name of the configured NetworkAttachmentDefinition . 2 runAsUser controls which user ID the container is run with. 3 runAsGroup controls which primary group ID the container is run with. 4 allowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, it defaults to true. This boolean directly controls whether the no_new_privs flag gets set on the container process. 
5 capabilities permit privileged actions without giving full root access. This policy ensures that all capabilities are dropped from the pod. 6 runAsNonRoot: true requires that the container run with a user whose UID is other than 0. 7 RuntimeDefault enables the default seccomp profile for a pod or container workload. Apply the YAML by running the following command: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh tunepod Verify the values of the configured sysctl flags. For example, find the value net.ipv4.conf.net1.accept_redirects by running the following command: sh-4.4# sysctl net.ipv4.conf.net1.accept_redirects Expected output net.ipv4.conf.net1.accept_redirects = 1 14.2. Additional resources Using sysctls in containers
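As a supplement to the verification steps above, the following is a small sketch for checking the configuration non-interactively. It assumes the tuningnad and tunepod names from the example and that jq is installed on the workstation; neither command is part of the original procedure.
oc get net-attach-def tuningnad -n default -o jsonpath='{.spec.config}' | jq .   # show the rendered CNI plugin chain
oc exec tunepod -n default -- sysctl net.ipv4.conf.net1.accept_redirects         # expect "= 1" without an interactive shell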
|
[
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ \"cniVersion\": \"0.4.0\", 3 \"name\": \"<name>\", 4 \"plugins\": [{ \"type\": \"<main_CNI_plugin>\" 5 }, { \"type\": \"tuning\", 6 \"sysctl\": { \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" 7 } } ] }",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" } } ] }'",
"oc apply -f tuning-example.yaml",
"networkattachmentdefinition.k8.cni.cncf.io/tuningnad created",
"apiVersion: v1 kind: Pod metadata: name: tunepod namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: [\"ALL\"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s",
"oc rsh tunepod",
"sh-4.4# sysctl net.ipv4.conf.net1.accept_redirects",
"net.ipv4.conf.net1.accept_redirects = 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/nodes-setting-interface-level-network-sysctls
|
Chapter 2. Eclipse Temurin features
|
Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 21 release of Eclipse Temurin includes, see OpenJDK 21.0.6 Released . New features and enhancements Eclipse Temurin 21.0.6 includes the following new features and enhancements. Option for jar command to avoid overwriting files when extracting an archive In earlier OpenJDK releases, when the jar tool extracted files from an archive, the jar tool overwrote any existing files with the same name in the target directory. OpenJDK 21.0.6 adds a new ‐k (or ‐‐keep-old-files ) option that you can use to ensure that the jar tool does not overwrite existing files. You can specify this new option in either short or long format. For example: jar xkf myfile.jar jar --extract ‐‐keep-old-files ‐‐file myfile.jar Note In OpenJDK 21.0.6, the jar tool retains the old behavior by default. If you do not explicitly specify the ‐k (or ‐‐keep-old-files ) option, the jar tool automatically overwrites any existing files with the same name. See JDK-8335912 (JDK Bug System) and JDK bug system reference ID: JDK-8337499. IANA time zone database updated to version 2024b In OpenJDK 21.0.6, the in-tree copy of the Internet Assigned Numbers Authority (IANA) time zone database is updated to version 2024b. This update is primarily concerned with improving historical data for Mexico, Mongolia, and Portugal. This update to the IANA database also includes the following changes: Asia/Choibalsan is an alias for Asia/Ulaanbaatar . The Middle European Time (MET) time zone is equal to Central European Time (CET). Some legacy time-zone IDs are mapped to geographical names rather than fixed offsets: Eastern Standard Time (EST) is mapped to America/Panama rather than -5:00 . Mountain Standard Time (MST) is mapped to America/Phoenix rather than -7:00 . Hawaii Standard Time (HST) is mapped to Pacific/Honolulu rather than -10:00 . OpenJDK overrides the change in the legacy time-zone ID mappings by retaining the existing fixed-offset mapping. See JDK-8339637 (JDK Bug System) . Revised on 2025-01-30 11:12:40 UTC
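The new option is easiest to see with a throwaway archive. The following shell sketch uses hypothetical file names and the long-form jar options; it is an illustration rather than part of the release notes.
mkdir -p demo && cd demo
echo "v1" > notes.txt
jar --create --file ../myfile.jar notes.txt
cd .. && mkdir -p extract && cd extract
jar --extract --file ../myfile.jar                      # first extraction writes notes.txt
echo "local edit" > notes.txt
jar --extract --keep-old-files --file ../myfile.jar     # notes.txt is left untouched
cat notes.txt                                           # still prints "local edit"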
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.6/openjdk-temurin-features-21-0-6_openjdk
|
Chapter 44. ip
|
Chapter 44. ip This chapter describes the commands under the ip command. 44.1. ip availability list List IP availability for network Usage: Table 44.1. Command arguments Value Summary -h, --help Show this help message and exit --ip-version <ip-version> List ip availability of given ip version networks (default is 4) --project <project> List ip availability of given project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 44.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 44.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 44.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.2. ip availability show Show network IP availability details Usage: Table 44.6. Positional arguments Value Summary <network> Show ip availability for a specific network (name or ID) Table 44.7. Command arguments Value Summary -h, --help Show this help message and exit Table 44.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 44.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 44.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
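Typical invocations look as follows; the project name demo and the network name provider-net are placeholders, and the options shown are taken from the tables above.
openstack ip availability list --ip-version 4 --project demo -f json
openstack ip availability show provider-net -f yaml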
|
[
"openstack ip availability list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--ip-version <ip-version>] [--project <project>] [--project-domain <project-domain>]",
"openstack ip availability show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <network>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/ip
|
Chapter 5. Securing Multicloud Object Gateway
|
Chapter 5. Securing Multicloud Object Gateway 5.1. Changing the default account credentials to ensure better security in the Multicloud Object Gateway Change and rotate your Multicloud Object Gateway (MCG) account credentials using the command-line interface to prevent issues with applications, and to ensure better account security. 5.1.1. Resetting the noobaa account password Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure To reset the noobaa account password, run the following command: Example: Example output: Important To access the admin account credentials run the noobaa status command from the terminal: 5.1.2. Regenerating the S3 credentials for the accounts Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Get the account name. For listing the accounts, run the following command: Example output: Alternatively, run the oc get noobaaaccount command from the terminal: Example output: To regenerate the noobaa account S3 credentials, run the following command: Once you run the noobaa account regenerate command it will prompt a warning that says "This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials." , and ask for confirmation: Example: Example output: On approving, it will regenerate the credentials and eventually print them: 5.1.3. Regenerating the S3 credentials for the OBC Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure To get the OBC name, run the following command: Example output: Alternatively, run the oc get obc command from the terminal: Example output: To regenerate the noobaa OBC S3 credentials, run the following command: Once you run the noobaa obc regenerate command it will prompt a warning that says "This will invalidate all connections between the S3 clients and noobaa which are connected using the current credentials." , and ask for confirmation: Example: Example output: On approving, it will regenerate the credentials and eventually print them: 5.2. Enabling secured mode deployment for Multicloud Object Gateway You can specify a range of IP addresses that should be allowed to reach the Multicloud Object Gateway (MCG) load balancer services to enable secure mode deployment. This helps to control the IP addresses that can access the MCG services. Note You can disable the MCG load balancer usage by setting the disableLoadBalancerService variable in the storagecluster custom resource definition (CRD) while deploying OpenShift Data Foundation using the command line interface. This helps to restrict MCG from creating any public resources for private clusters and to disable the MCG service EXTERNAL-IP . 
For more information, see the Red Hat Knowledgebase article Install Red Hat OpenShift Data Foundation 4.X in internal mode using command line interface . For information about disabling MCG load balancer service after deploying OpenShift Data Foundation, see Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation . Prerequisites A running OpenShift Data Foundation cluster. In case of a bare metal deployment, ensure that the load balancer controller supports setting the loadBalancerSourceRanges attribute in the Kubernetes services. Procedure Edit the NooBaa custom resource (CR) to specify the range of IP addresses that can access the MCG services after deploying OpenShift Data Foundation. noobaa The NooBaa CR type that controls the NooBaa system deployment. noobaa The name of the NooBaa CR. For example: loadBalancerSourceSubnets A new field that can be added under spec in the NooBaa CR to specify the IP addresses that should have access to the NooBaa services. In this example, all the IP addresses that are in the subnet 10.0.0.0/16 or 192.168.10.0/32 will be able to access MCG S3 and security token service (STS) while the other IP addresses are not allowed to access. Verification steps To verify if the specified IP addresses are set, in the OpenShift Web Console, run the following command and check if the output matches with the IP addresses provided to MCG:
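The verification command itself appears in the command listing for this section. As an optional, non-interactive alternative to oc edit , the same field can be set with a merge patch; this is a sketch only, and the field path simply mirrors the spec snippet shown above.
oc -n openshift-storage patch noobaa noobaa --type merge \
  -p '{"spec":{"loadBalancerSourceSubnets":{"s3":["10.0.0.0/16","192.168.10.0/32"],"sts":["10.0.0.0/16","192.168.10.0/32"]}}}'
oc -n openshift-storage get svc s3 -o=go-template='{{ .spec.loadBalancerSourceRanges }}'   # confirm the ranges were applied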
|
[
"noobaa account passwd <noobaa_account_name> [options]",
"noobaa account passwd FATA[0000] ❌ Missing expected arguments: <noobaa_account_name> Options: --new-password='': New Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in t he shell history --old-password='': Old Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history --retype-new-password='': Retype new Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history Usage: noobaa account passwd <noobaa-account-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).",
"noobaa account passwd [email protected]",
"Enter old-password: [got 24 characters] Enter new-password: [got 7 characters] Enter retype-new-password: [got 7 characters] INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✅ Exists: NooBaa \"noobaa\" INFO[0017] ✅ Exists: Service \"noobaa-mgmt\" INFO[0017] ✅ Exists: Secret \"noobaa-operator\" INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✈\\ufe0f RPC: account.reset_password() Request: {Email:[email protected] VerificationPassword: * Password: *} WARN[0017] RPC: GetConnection creating connection to wss://localhost:58460/rpc/ 0xc000402ae0 INFO[0017] RPC: Connecting websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0017] RPC: Connected websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0020] ✅ RPC: account.reset_password() Response OK: took 2907.1ms INFO[0020] ✅ Updated: \"noobaa-admin\" INFO[0020] ✅ Successfully reset the password for the account \"[email protected]\"",
"-------------------- - Mgmt Credentials - -------------------- email : [email protected] password : ***",
"noobaa account list",
"NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE account-test [*] noobaa-default-backing-store Ready 14m17s test2 [first.bucket] noobaa-default-backing-store Ready 3m12s",
"oc get noobaaaccount",
"NAME PHASE AGE account-test Ready 15m test2 Ready 3m59s",
"noobaa account regenerate <noobaa_account_name> [options]",
"noobaa account regenerate FATA[0000] ❌ Missing expected arguments: <noobaa-account-name> Usage: noobaa account regenerate <noobaa-account-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).",
"noobaa account regenerate account-test",
"INFO[0000] You are about to regenerate an account's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? y/n",
"INFO[0015] ✅ Exists: Secret \"noobaa-account-account-test\" Connection info: AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : ***",
"noobaa obc list",
"NAMESPACE NAME BUCKET-NAME STORAGE-CLASS BUCKET-CLASS PHASE default obc-test obc-test-35800e50-8978-461f-b7e0-7793080e26ba default.noobaa.io noobaa-default-bucket-class Bound",
"oc get obc",
"NAME STORAGE-CLASS PHASE AGE obc-test default.noobaa.io Bound 38s",
"noobaa obc regenerate <bucket_claim_name> [options]",
"noobaa obc regenerate FATA[0000] ❌ Missing expected arguments: <bucket-claim-name> Usage: noobaa obc regenerate <bucket-claim-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).",
"noobaa obc regenerate obc-test",
"INFO[0000] You are about to regenerate an OBC's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? y/n",
"INFO[0022] ✅ RPC: bucket.read_bucket() Response OK: took 95.4ms ObjectBucketClaim info: Phase : Bound ObjectBucketClaim : kubectl get -n default objectbucketclaim obc-test ConfigMap : kubectl get -n default configmap obc-test Secret : kubectl get -n default secret obc-test ObjectBucket : kubectl get objectbucket obc-default-obc-test StorageClass : kubectl get storageclass default.noobaa.io BucketClass : kubectl get -n default bucketclass noobaa-default-bucket-class Connection info: BUCKET_HOST : s3.default.svc BUCKET_NAME : obc-test-35800e50-8978-461f-b7e0-7793080e26ba BUCKET_PORT : 443 AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : *** Shell commands: AWS S3 Alias : alias s3='AWS_ACCESS_KEY_ID=*** AWS_SECRET_ACCESS_KEY =*** aws s3 --no-verify-ssl --endpoint-url ***' Bucket status: Name : obc-test-35800e50-8978-461f-b7e0-7793080e26ba Type : REGULAR Mode : OPTIMAL ResiliencyStatus : OPTIMAL QuotaStatus : QUOTA_NOT_SET Num Objects : 0 Data Size : 0.000 B Data Size Reduced : 0.000 B Data Space Avail : 13.261 GB Num Objects Avail : 9007199254740991",
"oc edit noobaa -n openshift-storage noobaa",
"spec: loadBalancerSourceSubnets: s3: [\"10.0.0.0/16\", \"192.168.10.0/32\"] sts: - \"10.0.0.0/16\" - \"192.168.10.0/32\"",
"oc get svc -n openshift-storage <s3 | sts> -o=go-template='{{ .spec.loadBalancerSourceRanges }}'"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_hybrid_and_multicloud_resources/securing-multicloud-object-gateway
|
11.3. Configuring a Kerberos Client
|
11.3. Configuring a Kerberos Client All that is required to set up a Kerberos 5 client is to install the client packages and provide each client with a valid krb5.conf configuration file. While ssh and slogin are the preferred methods of remotely logging in to client systems, Kerberos-aware versions of rsh and rlogin are still available, with additional configuration changes. Install the krb5-libs and krb5-workstation packages on all of the client machines. Supply a valid /etc/krb5.conf file for each client. Usually this can be the same krb5.conf file used by the Key Distribution Center (KDC). For example: In some environments, the KDC is only accessible using an HTTPS Kerberos Key Distribution Center Proxy (KKDCP). In this case, make the following changes: Assign the URL of the KKDCP instead of the host name to the kdc and admin_server options in the [realms] section: For redundancy, the parameters kdc , admin_server , and kpasswd_server can be added multiple times using different KKDCP servers. On IdM clients, restart the sssd service to make the changes take effect: To use Kerberos-aware rsh and rlogin services, install the rsh package. Before a workstation can use Kerberos to authenticate users who connect using ssh , rsh , or rlogin , it must have its own host principal in the Kerberos database. The sshd , kshd , and klogind server programs all need access to the keys for the host service's principal. Using kadmin , add a host principal for the workstation on the KDC. The instance in this case is the host name of the workstation. Use the -randkey option for kadmin 's addprinc command to create the principal and assign it a random key: The keys can be extracted for the workstation by running kadmin on the workstation itself and using the ktadd command. To use other Kerberos-aware network services, install the krb5-server package and start the services. The Kerberos-aware services are listed in Table 11.3, "Common Kerberos-aware Services" . Table 11.3. Common Kerberos-aware Services Service Name Usage Information ssh OpenSSH uses GSS-API to authenticate users to servers if the client's and server's configuration both have GSSAPIAuthentication enabled. If the client also has GSSAPIDelegateCredentials enabled, the user's credentials are made available on the remote system. OpenSSH also contains the sftp tool, which provides an FTP-like interface to SFTP servers and can use GSS-API. IMAP The cyrus-imap package uses Kerberos 5 if it also has the cyrus-sasl-gssapi package installed. The cyrus-sasl-gssapi package contains the Cyrus SASL plugins which support GSS-API authentication. Cyrus IMAP functions properly with Kerberos as long as the cyrus user is able to find the proper key in /etc/krb5.keytab , and the root for the principal is set to imap (created with kadmin ). An alternative to cyrus-imap can be found in the dovecot package, which is also included in Red Hat Enterprise Linux. This package contains an IMAP server but does not, to date, support GSS-API and Kerberos.
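After the packages and /etc/krb5.conf are in place, a quick functional check can be run from the client; the user principal below is a placeholder and these commands are not part of the original procedure.
kinit jsmith@EXAMPLE.COM          # obtain an initial ticket-granting ticket
klist                             # should list a krbtgt/EXAMPLE.COM@EXAMPLE.COM ticket
kvno host/server.example.com      # requests a service ticket for the host principal created with addprinc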
|
[
"yum install krb5-workstation krb5-libs",
"[logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = EXAMPLE.COM dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true allow_weak_crypto = true [realms] EXAMPLE.COM = { kdc = kdc.example.com.:88 admin_server = kdc.example.com default_domain = example.com } [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM",
"[realms] EXAMPLE.COM = { kdc = https://kdc.example.com/KdcProxy admin_server = https://kdc.example.com/KdcProxy kpasswd_server = https://kdc.example.com/KdcProxy default_domain = example.com }",
"systemctl restart sssd",
"addprinc -randkey host/server.example.com",
"ktadd -k /etc/krb5.keytab host/server.example.com"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/configuring_a_kerberos_5_client
|
Chapter 8. Upgrading Dev Spaces
|
Chapter 8. Upgrading Dev Spaces This chapter describes how to upgrade from CodeReady Workspaces 3.1 to OpenShift Dev Spaces 3.19. 8.1. Upgrading the chectl management tool This section describes how to upgrade the dsc management tool. Procedure Section 2.2, "Installing the dsc management tool" . 8.2. Specifying the update approval strategy The Red Hat OpenShift Dev Spaces Operator supports two upgrade strategies: Automatic The Operator installs new updates when they become available. Manual New updates need to be manually approved before installation begins. You can specify the update approval strategy for the Red Hat OpenShift Dev Spaces Operator by using the OpenShift web console. Prerequisites An OpenShift web console session by a cluster administrator. See Accessing the web console . An instance of OpenShift Dev Spaces that was installed by using Red Hat Ecosystem Catalog. Procedure In the OpenShift web console, navigate to Operators Installed Operators . Click Red Hat OpenShift Dev Spaces in the list of installed Operators. Navigate to the Subscription tab. Configure the Update approval strategy to Automatic or Manual . Additional resources Changing the update channel for an Operator 8.3. Upgrading Dev Spaces using the OpenShift web console You can manually approve an upgrade from an earlier minor version using the Red Hat OpenShift Dev Spaces Operator from the Red Hat Ecosystem Catalog in the OpenShift web console. Prerequisites An OpenShift web console session by a cluster administrator. See Accessing the web console . An instance of OpenShift Dev Spaces that was installed by using the Red Hat Ecosystem Catalog. The approval strategy in the subscription is Manual . See Section 8.2, "Specifying the update approval strategy" . Procedure Manually approve the pending Red Hat OpenShift Dev Spaces Operator upgrade. See Manually approving a pending Operator upgrade . Verification steps Navigate to the OpenShift Dev Spaces instance. The 3.19 version number is visible at the bottom of the page. Additional resources Manually approving a pending Operator upgrade 8.4. Upgrading Dev Spaces using the CLI management tool This section describes how to upgrade from the minor version using the CLI management tool. Prerequisites An administrative account on OpenShift. A running instance of a minor version of CodeReady Workspaces, installed using the CLI management tool on the same instance of OpenShift, in the openshift-devspaces OpenShift project. dsc for OpenShift Dev Spaces version 3.19. See: Section 2.2, "Installing the dsc management tool" . Procedure Save and push changes back to the Git repositories for all running CodeReady Workspaces 3.1 workspaces. Shut down all workspaces in the CodeReady Workspaces 3.1 instance. Upgrade OpenShift Dev Spaces: Note For slow systems or internet connections, add the --k8spodwaittimeout=1800000 flag option to extend the Pod timeout period to 1800000 ms or longer. Verification steps Navigate to the OpenShift Dev Spaces instance. The 3.19 version number is visible at the bottom of the page. 8.5. Upgrading Dev Spaces in a restricted environment This section describes how to upgrade Red Hat OpenShift Dev Spaces and perform minor version updates by using the CLI management tool in a restricted environment. Prerequisites The OpenShift Dev Spaces instance was installed on OpenShift using the dsc --installer operator method in the openshift-devspaces project. See Section 3.1.4, "Installing Dev Spaces in a restricted environment" . 
The OpenShift cluster has at least 64 GB of disk space. The OpenShift cluster is ready to operate on a restricted network. See About disconnected installation mirroring and Using Operator Lifecycle Manager on restricted networks . An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . An active oc registry session to the registry.redhat.io Red Hat Ecosystem Catalog. See: Red Hat Container Registry authentication . opm . See Installing the opm CLI . jq . See Downloading jq . podman . See Podman Installation Instructions . skopeo version 1.6 or higher. See Installing Skopeo . An active skopeo session with administrative access to the private Docker registry. Authenticating to a registry , and Mirroring images for a disconnected installation . dsc for OpenShift Dev Spaces version 3.19. See Section 2.2, "Installing the dsc management tool" . Procedure Download and execute the mirroring script to install a custom Operator catalog and mirror the related images: prepare-restricted-environment.sh . 1 The private Docker registry where the images will be mirrored In all running workspaces in the CodeReady Workspaces 3.1 instance, save and push changes back to the Git repositories. Stop all workspaces in the CodeReady Workspaces 3.1 instance. Run the following command: Verification steps Navigate to the OpenShift Dev Spaces instance. The 3.19 version number is visible at the bottom of the page. Additional resources Red Hat-provided Operator catalogs Managing custom catalogs 8.6. Repairing the Dev Workspace Operator on OpenShift Under certain conditions, such as OLM restart or cluster upgrade, the Dev Spaces Operator for OpenShift Dev Spaces might automatically install the Dev Workspace Operator even when it is already present on the cluster. In that case, you can repair the Dev Workspace Operator on OpenShift as follows: Prerequisites An active oc session as a cluster administrator to the destination OpenShift cluster. See Getting started with the CLI . On the Installed Operators page of the OpenShift web console, you see multiple entries for the Dev Workspace Operator or one entry that is stuck in a loop of Replacing and Pending . Procedure Delete the devworkspace-controller namespace that contains the failing pod. Update DevWorkspace and DevWorkspaceTemplate Custom Resource Definitions (CRD) by setting the conversion strategy to None and removing the entire webhook section: spec: ... conversion: strategy: None status: ... Tip You can find and edit the DevWorkspace and DevWorkspaceTemplate CRDs in the Administrator perspective of the OpenShift web console by searching for DevWorkspace in Administration CustomResourceDefinitions . Note The DevWorkspaceOperatorConfig and DevWorkspaceRouting CRDs have the conversion strategy set to None by default. Remove the Dev Workspace Operator subscription: 1 openshift-operators or an OpenShift project where the Dev Workspace Operator is installed. Get the Dev Workspace Operator CSVs in the <devworkspace_operator.vX.Y.Z> format: Remove each Dev Workspace Operator CSV: 1 openshift-operators or an OpenShift project where the Dev Workspace Operator is installed. Re-create the Dev Workspace Operator subscription: 1 Automatic or Manual . Important For installPlanApproval: Manual , in the Administrator perspective of the OpenShift web console, go to Operators Installed Operators and select the following for the Dev Workspace Operator : Upgrade available Preview InstallPlan Approve . 
In the Administrator perspective of the OpenShift web console, go to Operators Installed Operators and verify the Succeeded status of the Dev Workspace Operator .
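When installPlanApproval is set to Manual , the pending install plan can also be approved from the command line instead of the web console. The following is a sketch; it assumes the Operator is installed in openshift-operators and uses a placeholder install plan name.
oc get installplan -n openshift-operators
oc patch installplan <install_plan_name> -n openshift-operators --type merge -p '{"spec":{"approved":true}}'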
|
[
"dsc server:update -n openshift-devspaces",
"bash prepare-restricted-environment.sh --devworkspace_operator_index registry.redhat.io/redhat/redhat-operator-index:v4.17 --devworkspace_operator_version \"v0.32.0\" --prod_operator_index \"registry.redhat.io/redhat/redhat-operator-index:v4.17\" --prod_operator_package_name \"devspaces\" --prod_operator_bundle_name \"devspacesoperator\" --prod_operator_version \"v3.19.0\" --my_registry \" <my_registry> \" 1",
"dsc server:update --che-operator-image=\"USDTAG\" -n openshift-devspaces --k8spodwaittimeout=1800000",
"spec: conversion: strategy: None status:",
"oc delete sub devworkspace-operator -n openshift-operators 1",
"oc get csv | grep devworkspace",
"oc delete csv <devworkspace_operator.vX.Y.Z> -n openshift-operators 1",
"cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: devworkspace-operator namespace: openshift-operators spec: channel: fast name: devworkspace-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic 1 startingCSV: devworkspace-operator.v0.32.0 EOF"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/administration_guide/upgrading-devspaces
|
Common object reference
|
Common object reference OpenShift Container Platform 4.13 Reference guide common API objects Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/common_object_reference/index
|
Chapter 47. Red Hat Enterprise Linux System Roles Powered by Ansible
|
Chapter 47. Red Hat Enterprise Linux System Roles Powered by Ansible Red Hat Enterprise Linux System Roles Red Hat Enterprise Linux System Roles, available as a Technology Preview, is a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles . This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. Since Red Hat Enterprise Linux 7.4, the Red Hat Enterprise Linux System Roles packages have been distributed through the Extras channel. For details regarding Red Hat Enterprise Linux System Roles, see https://access.redhat.com/articles/3050101 . (BZ# 1313263 )
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology_previews_red_hat_enterprise_linux_system_roles_powered_by_ansible
|
Appendix A. Revision History
|
Appendix A. Revision History Revision History Revision 0.0-28 Mon Oct 30 2017 Lenka Spackova Added information on changes in the ld linker behavior to Deprecated Functionality. Revision 0.0-27 Tue Feb 14 2017 Lenka Spackova Updated the Samba 4.1.0 description (Authentication and Interoperability). Revision 0.0-25 Mon Aug 08 2016 Lenka Spackova Added a note about limited support for Windows guest virtual machines to Deprecated Functionality. Revision 0.0-23 Fri Mar 13 2015 Milan Navratil Update of the Red Hat Enterprise Linux 7.0 Release Notes. Revision 0.0-21 Wed Jan 06 2015 Jiri Herrmann Update of the Red Hat Enterprise Linux 7.0 Release Notes. Revision 0.0-1 Thu Dec 11 2013 Eliska Slobodova Release of the Red Hat Enterprise Linux 7.0 Beta Release Notes.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/appe-red_hat_enterprise_linux-7.0_release_notes-revision_history
|
Embedding Data Grid in Java Applications
|
Embedding Data Grid in Java Applications Red Hat Data Grid 8.4 Create embedded caches with Data Grid Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/index
|
Chapter 14. Using bound service account tokens
|
Chapter 14. Using bound service account tokens You can use bound service account tokens, which improves the ability to integrate with cloud provider identity access management (IAM) services, such as OpenShift Container Platform on AWS IAM or Google Cloud Platform IAM. 14.1. About bound service account tokens You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. You can request bound service account tokens by using volume projection and the TokenRequest API. 14.2. Configuring bound service account tokens using volume projection You can configure pods to request bound service account tokens by using volume projection. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have created a service account. This procedure assumes that the service account is named build-robot . Procedure Optional: Set the service account issuer. This step is typically not required if the bound tokens are used only within the cluster. Important If you change the service account issuer to a custom one, the service account issuer is still trusted for the 24 hours. You can force all holders to request a new bound token either by manually restarting all pods in the cluster or by performing a rolling node restart. Before performing either action, wait for a new revision of the Kubernetes API server pods to roll out with your service account issuer changes. Edit the cluster Authentication object: USD oc edit authentications cluster Set the spec.serviceAccountIssuer field to the desired service account issuer value: spec: serviceAccountIssuer: https://test.default.svc 1 1 This value should be a URL from which the recipient of a bound token can source the public keys necessary to verify the signature of the token. The default is https://kubernetes.default.svc . Save the file to apply the changes. Wait for a new revision of the Kubernetes API server pods to roll out. It can take several minutes for all nodes to update to the new revision. Run the following command: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 Optional: Force the holder to request a new bound token either by performing a rolling node restart or by manually restarting all pods in the cluster. Perform a rolling node restart: Warning It is not recommended to perform a rolling node restart if you have custom workloads running on your cluster, because it can cause a service interruption. Instead, manually restart all pods in the cluster. Restart nodes sequentially. Wait for the node to become fully available before restarting the node. 
See Rebooting a node gracefully for instructions on how to drain, restart, and mark a node as schedulable again. Manually restart all pods in the cluster: Warning Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. Run the following command: USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Configure a pod to use a bound service account token by using volume projection. Create a file called pod-projected-svc-token.yaml with the following contents: apiVersion: v1 kind: Pod metadata: name: nginx spec: securityContext: runAsNonRoot: true 1 seccompProfile: type: RuntimeDefault 2 containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] serviceAccountName: build-robot 3 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 4 expirationSeconds: 7200 5 audience: vault 6 1 Prevents containers from running as root to minimize compromise risks. 2 Sets the default seccomp profile, limiting to essential system calls, to reduce risks. 3 A reference to an existing service account. 4 The path relative to the mount point of the file to project the token into. 5 Optionally set the expiration of the service account token, in seconds. The default value is 3600 seconds (1 hour), and this value must be at least 600 seconds (10 minutes). The kubelet starts trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. 6 Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server. Note In order to prevent unexpected failure, OpenShift Container Platform overrides the expirationSeconds value to be one year from the initial token generation with the --service-account-extend-token-expiration default of true . You cannot change this setting. Create the pod: USD oc create -f pod-projected-svc-token.yaml The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration. The application that uses the bound token must handle reloading the token when it rotates. The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours. 14.3. Creating bound service account tokens outside the pod Prerequisites You have created a service account. This procedure assumes that the service account is named build-robot . 
Procedure Create the bound service account token outside the pod by running the following command: USD oc create token build-robot Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ Additional resources Rebooting a node gracefully Creating service accounts
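As a supplement to the procedure above, a token can also be requested with an explicit audience and lifetime, and its payload decoded to inspect the aud and exp claims. This is a sketch only; it assumes python3 is available for the base64url decode, and the audience value vault is illustrative.
oc create token build-robot --audience=vault --duration=2h | cut -d. -f2 | \
  python3 -c 'import base64,sys;p=sys.stdin.read().strip();print(base64.urlsafe_b64decode(p + "=" * (-len(p) % 4)).decode())'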
|
[
"oc edit authentications cluster",
"spec: serviceAccountIssuer: https://test.default.svc 1",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: Pod metadata: name: nginx spec: securityContext: runAsNonRoot: true 1 seccompProfile: type: RuntimeDefault 2 containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] serviceAccountName: build-robot 3 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 4 expirationSeconds: 7200 5 audience: vault 6",
"oc create -f pod-projected-svc-token.yaml",
"oc create token build-robot",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authentication_and_authorization/bound-service-account-tokens
|
9.5. Designing an Account Lockout Policy
|
9.5. Designing an Account Lockout Policy An account lockout policy can protect both directory data and user passwords by preventing unauthorized or compromised access to the directory. After an account has been locked, or deactivated , that user cannot bind to the directory, and any authentication operation fails. Account deactivation is implemented through the operational attribute nsAccountLock . When an entry contains the nsAccountLock attribute with a value of true , the server rejects a bind attempt by that account. An account lockout policy can be defined based on specific, automatic criteria: An account lockout policy can be associated with the password policy ( Section 9.6, "Designing a Password Policy" ). When a user fails to log in with the proper credentials after a specified number of times, the account is locked until an administrator manually unlocks it. This protects against crackers who try to break into the directory by repeatedly trying to guess a user's password. An account can be locked after a certain amount of time has lapsed. This can be used to control access for temporary users - such as interns, students, or seasonal workers - who have time-limited access based on the time the account was created. Alternatively, an account policy can be created that inactivates user accounts if the account has been inactive for a certain amount of time since the last login time. A time-based account lockout policy is defined through the Account Policy Plug-in, which sets global settings for the directory. Multiple account policy subentries can be created for different expiration times and types and then applied to entries through classes of service. Additionally, a single user account or a set of accounts (through roles) can be deactivated manually. Note Deactivating a role deactivates all of the members of that role and not the role entry itself. For more information about roles, see Section 4.3.2, "About Roles" .
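As an illustration of the mechanism (not part of the design discussion), the operational attribute can be read directly with ldapsearch; the server URL, bind DN, and user DN below are placeholders.
ldapsearch -x -H ldap://server.example.com -D "cn=Directory Manager" -W \
  -b "uid=jsmith,ou=People,dc=example,dc=com" -s base "(objectclass=*)" nsAccountLock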
| null |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_an_Account_Lockout_Policy
|
10.5.46. AddIcon
|
10.5.46. AddIcon AddIcon specifies which icon to show in server generated directory listings for files with certain extensions. For example, the Web server is set to show the icon binary.gif for files with .bin or .exe extensions.
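For context, a sketch of how such entries look in the default server configuration; the grep output depends on the installed configuration and the commented directive is only an illustration.
grep '^AddIcon ' /etc/httpd/conf/httpd.conf | head -n 5
# a custom mapping could be added in the same style, for example:
#   AddIcon /icons/tar.gif .tar .tgz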
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-addicon
|
Chapter 8. Configuring the systems and running tests using RHCert CLI Tool
|
Chapter 8. Configuring the systems and running tests using RHCert CLI Tool Cockpit is the preferred method to configure systems and run tests. However, the RHCert CLI continues to be available as an alternative for performing the same tasks. 8.1. Using the test plan to provision the Controller and Compute nodes for testing Provisioning the Controller and Compute nodes through the test host performs several operations, such as installing the required packages on the two nodes based on the certification type and creating a final test plan to run. The final test plan is generated based on the test roles defined for each node and has a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required OpenStack packages will be installed if the test plan is designed for certifying an OpenStack plugin. Prerequisites You have the IP address of both the Controller and Compute nodes. You have downloaded the test plan to the test host. Procedure Log in to the test host using the CLI. Provision the Controller and Compute nodes from the test host. Replace <path_to_test_plan_document> with the test plan file saved on the test host. Example: In addition to starting the Controller and Compute nodes and sending the test plan to both the nodes where the tests actually run, the command also establishes communication between the test host and each node. Select the RHOSP administrator account when prompted. Note Enter "tripleo-admin" if you use RHOSP 17.1 or later. Enter "heat-admin" if you use RHOSP 17 or earlier. Enter "root" if you have configured root as the SSH user for Controller and Compute nodes. Select None for "What is this host's role" when prompted. Tests applicable to each node are displayed. 8.2. Running the certification tests using CLI Note The tests run in the foreground on the Controller node; they are interactive and prompt you for inputs. The tests run in the background on the Compute node and are non-interactive. Procedure Run the tests. Example: Select the RHOSP administrator account when prompted. Note Enter "tripleo-admin" if you use RHOSP 17.1 or later. Enter "heat-admin" if you use RHOSP 17 or earlier. Enter "root" if you have configured root as the SSH user for Controller and Compute nodes. When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . Separate test results from each node are transferred to the test host, where they are merged into a single result file. By default, the result file is saved as /var/rhcert/save/rhcert-multi-openstack-<certification ID>-<timestamp>.xml .
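Once the run completes, the merged result file can be located and inspected on the test host. This is a sketch only; xmllint is not part of the certification tooling and is assumed to be installed.
ls -l /var/rhcert/save/
xmllint --format /var/rhcert/save/rhcert-multi-openstack-*.xml | head -n 20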
|
[
"rhcert-provision <path_to_test_plan_document> --host controller:<IP address of the controller> --host compute:<IP address of the compute>",
"rhcert-provision rhosp_test_plan.xml --host controller:192.168.24.23 --host compute:192.168.24.32",
"rhcert-run --host controller:<_IP address of the controller_> --host compute:<_IP address of the compute_>",
"rhcert-run --host controller:192.168.24.23 --host compute:192.168.24.32"
] |
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/assembly_rhosp-wf-configuring-the-systems-and-running-tests-using-cli_rhosp-wf-configure-systems-run-tests-use-cockpit
|
Part VII. Analytics
|
Part VII. Analytics
| null |
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/analytics
|
Chapter 6. Verifying HCI configuration
|
Chapter 6. Verifying HCI configuration After deployment is complete, verify the HCI environment is properly configured. 6.1. Verifying HCI configuration After the deployment of the HCI environment, verify that the deployment was successful with the configuration specified. Procedure Start a ceph shell. Confirm NUMA and memory target configuration: Confirm specific OSD configuration: Confirm specific OSD backfill configuration: Confirm the reserved_host_memory_mb configuration on the Compute node.
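The "Start a ceph shell" step typically maps to cephadm on RHOSP 17; the exact invocation may vary by deployment, so treat the following as a sketch run from a Controller (or other Ceph admin) node.
sudo cephadm shell
# then, inside the shell, for example:
ceph config dump | grep numa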
|
[
"ceph config dump | grep numa osd advanced osd_numa_auto_affinity true ceph config dump | grep autotune osd advanced osd_memory_target_autotune true ceph config get mgr mgr/cephadm/autotune_memory_target_ratio 0.200000",
"ceph config get osd.11 osd_memory_target 4294967296 ceph config get osd.11 osd_memory_target_autotune true ceph config get osd.11 osd_numa_auto_affinity true",
"ceph config get osd.11 osd_recovery_op_priority 3 ceph config get osd.11 osd_max_backfills 1 ceph config get osd.11 osd_recovery_max_active_hdd 3 ceph config get osd.11 osd_recovery_max_active_ssd 10",
"sudo podman exec -ti nova_compute /bin/bash bash-5.1USD grep reserved_host_memory_mb /etc/nova/nova.conf"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/hyperconverged_infrastructure_guide/assembly_verify-hci-configuration
|
Chapter 7. Installing on Azure Stack Hub
|
Chapter 7. Installing on Azure Stack Hub 7.1. Preparing to install on Azure Stack Hub 7.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You have installed Azure Stack Hub version 2008 or later. 7.1.2. Requirements for installing OpenShift Container Platform on Azure Stack Hub Before installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must configure an Azure account. See Configuring an Azure Stack Hub account for details about account configuration, account limits, DNS zone configuration, required roles, and creating service principals. 7.1.3. Choosing a method to install OpenShift Container Platform on Azure Stack Hub You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 7.1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure : You can install OpenShift Container Platform on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program. 7.1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that you provision, by using the following method: Installing a cluster on Azure Stack Hub using ARM templates : You can install OpenShift Container Platform on Azure Stack Hub by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 7.1.4. steps Configuring an Azure Stack Hub account 7.2. Configuring an Azure Stack Hub account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 7.2.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. 
By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 7.2.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . 7.2.3. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 7.2.4. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. 
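For example, if the region of your Azure Stack Hub deployment were ppe3 and its external fully qualified domain name were stackpoc.contoso.com (both values are hypothetical placeholders, not defaults), the registration command would take the following form:
USD az cloud register -n AzureStackCloud --endpoint-resource-manager https://management.ppe3.stackpoc.contoso.com/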
Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 7.2.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure Stack Hub . Install an OpenShift Container Platform cluster on Azure Stack Hub with user-provisioned infrastructure by following Installing a cluster on Azure Stack Hub using ARM templates . 7.3.
Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure In OpenShift Container Platform version 4.10, you can install a cluster on Microsoft Azure Stack Hub with an installer-provisioned infrastructure. However, you must manually configure the install-config.yaml file to specify values that are specific to Azure Stack Hub. Note While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud. 7.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space. 7.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
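As a brief illustration of what this key pair later enables, after the key is added to your ssh-agent by the following procedure and the cluster is installed, you can open a session on a node with a command of the following form, where the node address is a hypothetical placeholder:
USD ssh core@<node_address>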
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.3.4. Uploading the RHCOS cluster image You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Prerequisites Configure an Azure account. Procedure Obtain the RHCOS VHD cluster image: Export the URL of the RHCOS VHD to an environment variable. USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Download the compressed RHCOS VHD file locally. USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob using the az cli or the web portal. 7.3.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select Azure as the cloud provider. 
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 7.3.6. Manually creating the installation configuration file When installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must manually create your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications: Specify the required installation parameters. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub. Optional: Update one or more of the default configuration parameters to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 7.3.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.3.6.1.1. 
Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.3.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 7.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . 
The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.3.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 7.3.6.1.4. Additional Azure Stack Hub configuration parameters Additional Azure configuration parameters are described in the following table: Table 7.4. 
Additional Azure Stack Hub parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS . platform.azure.armEndpoint The URL of the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. String platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.region The name of your Azure Stack Hub local region. String platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. AzureStackCloud clusterOSImage The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. String, for example, https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 7.3.6.2. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 9 10 baseDomainResourceGroupName: resource_group 11 12 region: azure_stack_local_region 13 14 resourceGroupName: existing_resource_group 15 outboundType: LoadBalancer cloudName: AzureStackCloud 16 clusterOSImage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 17 18 pullSecret: '{"auths": ...}' 19 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 7 9 11 13 16 17 19 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 6 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 8 The name of the cluster. 10 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 12 The name of the resource group that contains the DNS zone for your base domain. 14 The name of your Azure Stack Hub local region. 15 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 18 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. 20 The pull secret required to authenticate your cluster. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required. 7.3.7.
Manually manage cloud credentials The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider. Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \ --credentials-requests \ --cloud=azure This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component-secret> namespace: <component-namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-gate: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. 
To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command: USD grep "release.openshift.io/feature-gate" * Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating cloud provider resources with manually maintained credentials 7.3.8. Configuring the cluster to use an internal CA If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA. Prerequisites Create the install-config.yaml file and specify the certificate trust bundle in .pem format. Create the cluster manifests. Procedure From the directory in which the installation program creates files, go to the manifests directory. Add user-ca-bundle to the spec.trustedCA.name field. Example cluster-proxy-01-config.yaml file apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {} Optional: Back up the manifests/ cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster. 7.3.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 7.3.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.3.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
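Before you export the kubeconfig file, you can optionally confirm that the oc binary on your PATH is the version that you installed; the version shown in this sample output is illustrative only:
USD oc version --client
Example output
Client Version: 4.10.3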
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.3.12. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 7.3.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.3.14. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 7.4. Installing a cluster on Azure Stack Hub with network customizations In OpenShift Container Platform version 4.10, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Azure Stack Hub. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Note While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud. 7.4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 
You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space. 7.4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
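For example, to create a key that uses the ecdsa algorithm instead, you can run a command of the following form, where the path and file name are placeholders for your own values:
USD ssh-keygen -t ecdsa -N '' -f <path>/<file_name>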
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.4.4. Uploading the RHCOS cluster image You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Prerequisites Configure an Azure account. Procedure Obtain the RHCOS VHD cluster image: Export the URL of the RHCOS VHD to an environment variable. USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Download the compressed RHCOS VHD file locally. USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob using the az cli or the web portal. 7.4.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select Azure as the cloud provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 7.4.6. Manually creating the installation configuration file When installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must manually create your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications: Specify the required installation parameters. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub. Optional: Update one or more of the default configuration parameters to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 7.4.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.4.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.5. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. 
DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.4.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 7.6. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.4.6.1.3. 
Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.7. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. 
If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 7.4.6.1.4. Additional Azure Stack Hub configuration parameters Additional Azure configuration parameters are described in the following table: Table 7.8. Additional Azure Stack Hub parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS . platform.azure.armEndpoint The URL of the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. String platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. 
String, for example production_cluster . platform.azure.region The name of your Azure Stack Hub local region. String platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. AzureStackCloud clusterOSImage The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. String, for example, https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 7.4.6.2. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 9 10 baseDomainResourceGroupName: resource_group 11 12 region: azure_stack_local_region 13 14 resourceGroupName: existing_resource_group 15 outboundType: LoadBalancer cloudName: AzureStackCloud 16 clusterOSImage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 17 18 pullSecret: '{"auths": ...}' 19 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 7 9 11 13 16 17 19 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not.
Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 6 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 8 The name of the cluster. 10 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 12 The name of the resource group that contains the DNS zone for your base domain. 14 The name of your Azure Stack Hub local region. 15 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 18 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. 20 The pull secret required to authenticate your cluster. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required. 7.4.7. Manually manage cloud credentials The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider. Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \ --credentials-requests \ --cloud=azure This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... 
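Each extracted CredentialsRequest object also includes a spec.secretRef stanza that defines the namespace and secret name that the component expects, and you need those values in the next step. As a convenience, a search along the following lines collects them; this is only a sketch that assumes the CredentialsRequest YAML files were written to the current working directory, so adjust the path and file pattern for your environment: USD grep -A 2 "secretRef:" *.yaml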
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component-secret> namespace: <component-namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-gate: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command: USD grep "release.openshift.io/feature-gate" * Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating a cluster using the web console Updating a cluster using the CLI 7.4.8. Configuring the cluster to use an internal CA If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA. Prerequisites Create the install-config.yaml file and specify the certificate trust bundle in .pem format. Create the cluster manifests. Procedure From the directory in which the installation program creates files, go to the manifests directory. Add user-ca-bundle to the spec.trustedCA.name field. Example cluster-proxy-01-config.yaml file apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {} Optional: Back up the manifests/ cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster. 7.4.9. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. 
Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 7.4.10. Specifying advanced network configuration You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 7.4.11. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. 
defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 7.4.11.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 7.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 7.10. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 7.11. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. 
If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 7.12. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note Table 7.13. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 7.14. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. 
For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 7.15. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 7.4.12. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster. Important You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. 
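Optionally, confirm that the stub was written as expected before you edit it; for example, the following command prints the skeleton that you created in the preceding step: USD cat <installation_directory>/manifests/cluster-network-03-config.yml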
Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 7.4.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 7.4.14. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.4.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.4.16. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console . 7.4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.4.18. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 7.5. Installing a cluster on Azure Stack Hub using ARM templates In OpenShift Container Platform version 4.10, you can install a cluster on Microsoft Azure Stack Hub by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 7.5.1. 
Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The documentation below was tested using version 2.28.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 7.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.5.3. Configuring your Azure Stack Hub project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure Stack Hub resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure Stack Hub restricts, see Resolve reserved resource name errors in the Azure documentation. 7.5.3.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. 
Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 7.5.3.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . You can view Azure's DNS solution by visiting this example for creating DNS zones . 7.5.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 7.5.3.4. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 7.5.3.5. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. 
See the Microsoft documentation for details. Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 7.5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site.
If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select Azure as the cloud provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 7.5.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
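If FIPS compliance applies in your environment, a command along the following lines generates an RSA key instead; this is a sketch only, and the 4096-bit key size and file path are illustrative values that you can adjust: USD ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa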
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5.6. Creating the installation files for Azure Stack Hub To install OpenShift Container Platform on Microsoft Azure Stack Hub using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You manually create the install-config.yaml file, and then generate and customize the Kubernetes manifests and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 7.5.6.1. Manually creating the installation configuration file Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications for Azure Stack Hub: Set the replicas parameter to 0 for the compute pool: compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1 1 Set to 0 . The compute machines will be provisioned manually later.
Update the platform.azure section of the install-config.yaml file to configure your Azure Stack Hub configuration: platform: azure: armEndpoint: <azurestack_arm_endpoint> 1 baseDomainResourceGroupName: <resource_group> 2 cloudName: AzureStackCloud 3 region: <azurestack_region> 4 1 Specify the Azure Resource Manager endpoint of your Azure Stack Hub environment, like https://management.local.azurestack.external . 2 Specify the name of the resource group that contains the DNS zone for your base domain. 3 Specify the Azure Stack Hub environment, which is used to configure the Azure SDK with the appropriate Azure API endpoints. 4 Specify the name of your Azure Stack Hub region. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 7.5.6.2. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com controlPlane: 1 name: master platform: azure: osDisk: diskSizeGB: 1024 2 diskType: premium_LRS replicas: 3 compute: 3 - name: worker platform: azure: osDisk: diskSizeGB: 512 4 diskType: premium_LRS replicas: 0 metadata: name: test-cluster 5 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 6 baseDomainResourceGroupName: resource_group 7 region: azure_stack_local_region 8 resourceGroupName: existing_resource_group 9 outboundType: Loadbalancer cloudName: AzureStackCloud 10 pullSecret: '{"auths": ...}' 11 fips: false 12 additionalTrustBundle: | 13 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- sshKey: ssh-ed25519 AAAA... 14 1 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 2 4 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 5 Specify the name of the cluster. 6 Specify the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 7 Specify the name of the resource group that contains the DNS zone for your base domain. 8 Specify the name of your Azure Stack Hub local region. 9 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 10 Specify the Azure Stack Hub environment as your target platform. 11 Specify the pull secret required to authenticate your cluster. 12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 13 If your Azure Stack Hub environment uses an internal certificate authority (CA), add the necessary certificate bundle in .pem format. 14 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.5.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. 
Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.5.6.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure Stack Hub. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into. This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 7.5.6.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. Optional: If your Azure Stack Hub environment uses an internal certificate authority (CA), you must update the .spec.trustedCA.name field in the <installation_directory>/manifests/cluster-proxy-01-config.yaml file to use user-ca-bundle : ... spec: trustedCA: name: user-ca-bundle ... Later, you must update your bootstrap ignition to include the CA. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. 
This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exist as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. Manually create your cloud credentials. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. The format for the secret data varies for each cloud provider. Sample secrets.yaml file: apiVersion: v1 kind: Secret metadata: name: USD{secret_name} namespace: USD{secret_namespace} stringData: azure_subscription_id: USD{subscription_id} azure_client_id: USD{app_id} azure_client_secret: USD{client_secret} azure_tenant_id: USD{tenant_id} azure_resource_prefix: USD{cluster_name} azure_resourcegroup: USD{resource_group} azure_region: USD{azure_region} Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-gate: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects.
To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command: USD grep "release.openshift.io/feature-gate" * Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade Create a cco-configmap.yaml file in the manifests directory with the Cloud Credential Operator (CCO) disabled: Sample ConfigMap object apiVersion: v1 kind: ConfigMap metadata: name: cloud-credential-operator-config namespace: openshift-cloud-credential-operator annotations: release.openshift.io/create-only: "true" data: disabled: "true" To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 7.5.6.6. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... 
INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.10.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 7.5.7. Creating the Azure resource group You must create a Microsoft Azure resource group . This is used during the installation of your OpenShift Container Platform cluster on Azure Stack Hub. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} 7.5.8. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. 
Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Download the compressed RHCOS VHD file locally: USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. You can delete the VHD file after you upload it. Copy the local VHD to a blob: USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 7.5.9. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure Stack Hub's datacenter DNS integration is used, so you will create a DNS zone. Note The DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a DNS zone that already exists. You can learn more about configuring a DNS zone in Azure Stack Hub by visiting that section. 7.5.10. 
Creating a VNet in Azure Stack Hub You must create a virtual network (VNet) in Microsoft Azure Stack Hub for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure Stack Hub infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 7.5.10.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 7.1. 01_vnet.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/01_vnet.json[] 7.5.11. Deploying the RHCOS cluster image for the Azure Stack Hub infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure Stack Hub for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 7.5.11.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 7.2. 02_storage.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/02_storage.json[] 7.5.12. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 7.5.12.1. 
Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Table 7.16. Ports used for all-machine to all-machine communications
Protocol | Port | Description
ICMP | N/A | Network reachability tests
TCP | 1936 | Metrics
TCP | 9000 - 9999 | Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099
TCP | 10250 - 10259 | The default ports that Kubernetes reserves
TCP | 10256 | openshift-sdn
UDP | 4789 | VXLAN
UDP | 6081 | Geneve
UDP | 9000 - 9999 | Host level services, including the node exporter on ports 9100 - 9101
UDP | 500 | IPsec IKE packets
UDP | 4500 | IPsec NAT-T packets
TCP/UDP | 30000 - 32767 | Kubernetes node port
ESP | N/A | IPsec Encapsulating Security Payload (ESP)
Table 7.17. Ports used for all-machine to control plane communications
Protocol | Port | Description
TCP | 6443 | Kubernetes API
Table 7.18. Ports used for control plane machine to control plane machine communications
Protocol | Port | Description
TCP | 2379 - 2380 | etcd server and peer ports
7.5.13. Creating networking and load balancing components in Azure Stack Hub You must configure networking and load balancing in Microsoft Azure Stack Hub for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Load balancing requires the following DNS records: An api DNS record for the API public load balancer in the DNS zone. An api-int DNS record for the API internal load balancer in the DNS zone. Note If you do not use the provided ARM template to create your Azure Stack Hub infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record and an api-int DNS record. When creating the API DNS records, the USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the DNS zone exists.
Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Export the following variable: USD export PRIVATE_IP=`az network lb frontend-ip show -g "USDRESOURCE_GROUP" --lb-name "USD{INFRA_ID}-internal" -n internal-lb-ip --query "privateIpAddress" -o tsv` Create the api DNS record in a new DNS zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing DNS zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 Create the api-int DNS record in a new DNS zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z "USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" -n api-int -a USD{PRIVATE_IP} --ttl 60 If you are adding the cluster to an existing DNS zone, you can create the api-int DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api-int.USD{CLUSTER_NAME} -a USD{PRIVATE_IP} --ttl 60 7.5.13.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 7.3. 03_infra.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/03_infra.json[] 7.5.14. Creating the bootstrap machine in Azure Stack Hub You must create the bootstrap machine in Microsoft Azure Stack Hub to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. 
Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: If your environment uses a public certificate authority (CA), run this command: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` If your environment uses an internal CA, you must add your PEM encoded bundle to the bootstrap ignition stub so that your bootstrap virtual machine can pull the bootstrap ignition from the storage account. Run the following commands, which assume your CA is in a file called CA.pem : USD export CA="data:text/plain;charset=utf-8;base64,USD(cat CA.pem |base64 |tr -d '\n')" USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url "USDBOOTSTRAP_URL" --arg cert "USDCA" '{ignition:{version:USDv,security:{tls:{certificateAuthorities:[{source:USDcert}]}},config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create --verbose -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 7.5.14.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 7.4. 04_bootstrap.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/04_bootstrap.json[] 7.5.15. Creating the control plane machines in Azure Stack Hub You must create the control plane machines in Microsoft Azure Stack Hub for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. 
Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The Ignition content for the control plane nodes (also known as the master nodes). 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 7.5.15.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 7.5. 05_masters.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/05_masters.json[] 7.5.16. Wait for bootstrap completion and remove bootstrap resources in Azure Stack Hub After you create all of the required infrastructure in Microsoft Azure Stack Hub, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 7.5.17. Creating additional worker machines in Azure Stack Hub You can create worker machines in Microsoft Azure Stack Hub for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. 
You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. If you do not use the provided ARM template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 7.5.17.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 7.6. 06_workers.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/06_workers.json[] 7.5.18. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu.
Click Download Now to the OpenShift v4.10 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.5.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.5.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. 
You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 7.5.21. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure Stack Hub by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . 
Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the DNS zone. If you are adding this cluster to a new DNS zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing DNS zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com grafana-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 7.5.22. Completing an Azure Stack Hub installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure Stack Hub user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure Stack Hub infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Additional resources See About remote health monitoring for more information about the Telemetry service. 7.6. Uninstalling a cluster on Azure Stack Hub You can remove a cluster that you deployed to Azure Stack Hub. 7.6.1. 
Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites Have a copy of the installation program that you used to deploy the cluster. Have the files that the installation program generated when you created your cluster. While you can uninstall the cluster using the copy of the installation program that was used to deploy it, using OpenShift Container Platform version 4.13 or later is recommended. The removal of service principals is dependent on the Microsoft Azure AD Graph API. Using version 4.13 or later of the installation program ensures that service principals are removed without the need for manual intervention, if and when Microsoft decides to retire the Azure AD Graph API. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
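As a follow-up to the note above about leftover resources, one way to audit what remains after the destroy command finishes is to query the affected resource groups with the Azure CLI. This is only a sketch; the group and zone names are placeholders for your own values:

az resource list -g <cluster_resource_group> -o table
az network dns record-set list -g <base_domain_resource_group> -z <base_domain> -o table

Anything still listed for the cluster, such as orphaned disks, NICs, or stale api and *.apps DNS records, can then be removed manually.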
|
[
"az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1",
"az cloud set -n AzureStackCloud",
"az cloud update --profile 2019-03-01-hybrid",
"az login",
"az account list --refresh",
"[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 9 10 baseDomainResourceGroupName: resource_group 11 12 region: azure_stack_local_region 13 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzureStackCloud 16 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 17 18 pullSecret: '{\"auths\": ...}' 19 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"openshift-install create manifests --dir <installation_directory>",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"grep \"release.openshift.io/feature-gate\" *",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 9 10 baseDomainResourceGroupName: resource_group 11 12 region: azure_stack_local_region 13 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzureStackCloud 16 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 17 18 pullSecret: '{\"auths\": ...}' 19 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"openshift-install create manifests --dir <installation_directory>",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"grep \"release.openshift.io/feature-gate\" *",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes.",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1",
"az cloud set -n AzureStackCloud",
"az cloud update --profile 2019-03-01-hybrid",
"az login",
"az account list --refresh",
"[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1",
"platform: azure: armEndpoint: <azurestack_arm_endpoint> 1 baseDomainResourceGroupName: <resource_group> 2 cloudName: AzureStackCloud 3 region: <azurestack_region> 4",
"apiVersion: v1 baseDomain: example.com controlPlane: 1 name: master platform: azure: osDisk: diskSizeGB: 1024 2 diskType: premium_LRS replicas: 3 compute: 3 - name: worker platform: azure: osDisk: diskSizeGB: 512 4 diskType: premium_LRS replicas: 0 metadata: name: test-cluster 5 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 6 baseDomainResourceGroupName: resource_group 7 region: azure_stack_local_region 8 resourceGroupName: existing_resource_group 9 outboundType: Loadbalancer cloudName: AzureStackCloud 10 pullSecret: '{\"auths\": ...}' 11 fips: false 12 additionalTrustBundle: | 13 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- sshKey: ssh-ed25519 AAAA... 14",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"spec: trustedCA: name: user-ca-bundle",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: v1 kind: Secret metadata: name: USD{secret_name} namespace: USD{secret_namespace} stringData: azure_subscription_id: USD{subscription_id} azure_client_id: USD{app_id} azure_client_secret: USD{client_secret} azure_tenant_id: USD{tenant_id} azure_resource_prefix: USD{cluster_name} azure_resourcegroup: USD{resource_group} azure_region: USD{azure_region}",
"grep \"release.openshift.io/feature-gate\" *",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade",
"apiVersion: v1 kind: ConfigMap metadata: name: cloud-credential-operator-config namespace: openshift-cloud-credential-operator annotations: release.openshift.io/create-only: \"true\" data: disabled: \"true\"",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.10.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/01_vnet.json[]",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/02_storage.json[]",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"export PRIVATE_IP=`az network lb frontend-ip show -g \"USDRESOURCE_GROUP\" --lb-name \"USD{INFRA_ID}-internal\" -n internal-lb-ip --query \"privateIpAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z \"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" -n api-int -a USD{PRIVATE_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api-int.USD{CLUSTER_NAME} -a USD{PRIVATE_IP} --ttl 60",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/03_infra.json[]",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"export CA=\"data:text/plain;charset=utf-8;base64,USD(cat CA.pem |base64 |tr -d '\\n')\"",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url \"USDBOOTSTRAP_URL\" --arg cert \"USDCA\" '{ignition:{version:USDv,security:{tls:{certificateAuthorities:[{source:USDcert}]}},config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create --verbose -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/04_bootstrap.json[]",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/05_masters.json[]",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.10/upi/azurestack/06_workers.json[]",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com grafana-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/installing/installing-on-azure-stack-hub
|
Chapter 27. JvmOptions schema reference
|
Chapter 27. JvmOptions schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , KafkaNodePoolSpec , ZookeeperClusterSpec Property Property type Description -XX map A map of -XX options to the JVM. -Xms string -Xms option to the JVM. -Xmx string -Xmx option to the JVM. gcLoggingEnabled boolean Specifies whether the Garbage Collection logging is enabled. The default is false. javaSystemProperties SystemProperty array A map of additional system properties which will be passed using the -D option to the JVM.
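For illustration only, the following fragment sketches how these properties might be set on a Kafka cluster; the values are examples rather than defaults, and the surrounding Kafka resource is abbreviated:

spec:
  kafka:
    jvmOptions:
      "-Xms": "2g"
      "-Xmx": "2g"
      "-XX":
        "UseG1GC": "true"
      gcLoggingEnabled: true
      javaSystemProperties:
        - name: javax.net.debug
          value: ssl

Setting -Xms equal to -Xmx is a common choice because it avoids heap resizing while the broker is running.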
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-jvmoptions-reference
|
Chapter 10. Setting up graphical representation of PCP metrics
|
Chapter 10. Setting up graphical representation of PCP metrics Using a combination of pcp , grafana , pcp redis , pcp bpftrace , and pcp vector provides graphical representation of the live data or data collected by Performance Co-Pilot (PCP). 10.1. Setting up PCP with pcp-zeroconf This procedure describes how to set up PCP on a system with the pcp-zeroconf package. Once the pcp-zeroconf package is installed, the system records the default set of metrics into archived files. Procedure Install the pcp-zeroconf package: Verification Ensure that the pmlogger service is active, and starts archiving the metrics: Additional resources pmlogger man page on your system Monitoring performance with Performance Co-Pilot 10.2. Setting up a grafana-server Grafana generates graphs that are accessible from a browser. The grafana-server is a back-end server for the Grafana dashboard. It listens, by default, on all interfaces, and provides web services accessed through the web browser. The grafana-pcp plugin interacts with the pmproxy daemon in the backend. This procedure describes how to set up a grafana-server . Prerequisites PCP is configured. For more information, see Setting up PCP with pcp-zeroconf . Procedure Install the following packages: Restart and enable the following service: Open the server's firewall for network traffic to the Grafana service. Verification Ensure that the grafana-server is listening and responding to requests: Ensure that the grafana-pcp plugin is installed: Additional resources pmproxy(1) and grafana-server man pages on your system 10.3. Accessing the Grafana web UI This procedure describes how to access the Grafana web interface. Using the Grafana web interface, you can: add PCP Redis, PCP bpftrace, and PCP Vector data sources create dashboard view an overview of any useful metrics create alerts in PCP Redis Prerequisites PCP is configured. For more information, see Setting up PCP with pcp-zeroconf . The grafana-server is configured. For more information, see Setting up a grafana-server . Procedure On the client system, open a browser and access the grafana-server on port 3000 , using http://192.0.2.0 :3000 link. Replace 192.0.2.0 with your machine IP when accessing Grafana web UI from a remote machine, or with localhost when accessing the web UI locally. For the first login, enter admin in both the Email or username and Password field. Grafana prompts to set a New password to create a secured account. If you want to set it later, click Skip . From the menu, hover over the Configuration icon and then click Plugins . In the Plugins tab, type performance co-pilot in the Search by name or type text box and then click Performance Co-Pilot (PCP) plugin. In the Plugins / Performance Co-Pilot pane, click Enable . Click Grafana icon. The Grafana Home page is displayed. Figure 10.1. Home Dashboard Note The top corner of the screen has a similar icon, but it controls the general Dashboard settings . In the Grafana Home page, click Add your first data source to add PCP Redis, PCP bpftrace, and PCP Vector data sources. For more information about adding data source, see: To add pcp redis data source, view default dashboard, create a panel, and an alert rule, see Creating panels and alert in PCP Redis data source . To add pcp bpftrace data source and view the default dashboard, see Viewing the PCP bpftrace System Analysis dashboard . To add pcp vector data source, view the default dashboard, and to view the vector checklist, see Viewing the PCP Vector Checklist . 
Optional: From the menu, hover over the admin profile icon to change the Preferences including Edit Profile , Change Password , or to Sign out . Additional resources grafana-cli and grafana-server man pages on your system 10.4. Configuring PCP Redis Use the PCP Redis data source to: View data archives Query time series using pmseries language Analyze data across multiple hosts Prerequisites PCP is configured. For more information, see Setting up PCP with pcp-zeroconf . The grafana-server is configured. For more information, see Setting up a grafana-server . Mail transfer agent, for example, sendmail or postfix is installed and configured. Procedure Install the redis package: Note From Red Hat Enterprise Linux 8.4, Redis 6 is supported but the yum update command does not update Redis 5 to Redis 6. To update from Redis 5 to Redis 6, run: # yum module switch-to redis:6 Start and enable the following services: Restart the grafana-server : Verification Ensure that the pmproxy and redis are working: This command does not return any data if the redis package is not installed. Additional resources pmseries(1) man page on your system 10.5. Creating panels and alerts in PCP Redis data source After adding the PCP Redis data source, you can view the dashboard with an overview of useful metrics, add a query to visualize the load graph, and create alerts that help you to view the system issues after they occur. Prerequisites The PCP Redis is configured. For more information, see Configuring PCP Redis . The grafana-server is accessible. For more information, see Accessing the Grafana web UI . Procedure Log into the Grafana web UI. In the Grafana Home page, click Add your first data source . In the Add data source pane, type redis in the Filter by name or type text box and then click PCP Redis . In the Data Sources / PCP Redis pane, perform the following: Add http://localhost:44322 in the URL field and then click Save & Test . Click Dashboards tab Import PCP Redis: Host Overview to see a dashboard with an overview of any useful metrics. Figure 10.2. PCP Redis: Host Overview Add a new panel: From the menu, hover over the Create icon Dashboard Add new panel icon to add a panel. In the Query tab, select the PCP Redis from the query list instead of the selected default option and in the text field of A , enter metric, for example, kernel.all.load to visualize the kernel load graph. Optional: Add Panel title and Description , and update other options from the Settings . Click Save to apply changes and save the dashboard. Add Dashboard name . Click Apply to apply changes and go back to the dashboard. Figure 10.3. PCP Redis query panel Create an alert rule: In the PCP Redis query panel , click Alert and then click Create Alert . Edit the Name , Evaluate query , and For fields from the Rule , and specify the Conditions for your alert. Click Save to apply changes and save the dashboard. Click Apply to apply changes and go back to the dashboard. Figure 10.4. Creating alerts in the PCP Redis panel Optional: In the same panel, scroll down and click Delete icon to delete the created rule. Optional: From the menu, click Alerting icon to view the created alert rules with different alert statuses, to edit the alert rule, or to pause the existing rule from the Alert Rules tab. To add a notification channel for the created alert rule to receive an alert notification from Grafana, see Adding notification channels for alerts . 10.6. 
Adding notification channels for alerts By adding notification channels, you can receive an alert notification from Grafana whenever the alert rule conditions are met and the system needs further monitoring. You can receive these alerts after selecting any one type from the supported list of notifiers, which includes DingDing , Discord , Email , Google Hangouts Chat , HipChat , Kafka REST Proxy , LINE , Microsoft Teams , OpsGenie , PagerDuty , Prometheus Alertmanager , Pushover , Sensu , Slack , Telegram , Threema Gateway , VictorOps , and webhook . Prerequisites The grafana-server is accessible. For more information, see Accessing the Grafana web UI . An alert rule is created. For more information, see Creating panels and alert in PCP Redis data source . Configure SMTP and add a valid sender's email address in the grafana/grafana.ini file: Replace [email protected] by a valid email address. Restart grafana-server Procedure From the menu, hover over the Alerting icon click Notification channels Add channel . In the Add notification channel details pane, perform the following: Enter your name in the Name text box Select the communication Type , for example, Email and enter the email address. You can add multiple email addresses using the ; separator. Optional: Configure Optional Email settings and Notification settings . Click Save . Select a notification channel in the alert rule: From the menu, hover over the Alerting icon and then click Alert rules . From the Alert Rules tab, click the created alert rule. On the Notifications tab, select your notification channel name from the Send to option, and then add an alert message. Click Apply . Additional resources Upstream Grafana documentation for alert notifications 10.7. Setting up authentication between PCP components You can setup authentication using the scram-sha-256 authentication mechanism, which is supported by PCP through the Simple Authentication Security Layer (SASL) framework. Note From Red Hat Enterprise Linux 8.3, PCP supports the scram-sha-256 authentication mechanism. Procedure Install the sasl framework for the scram-sha-256 authentication mechanism: Specify the supported authentication mechanism and the user database path in the pmcd.conf file: Create a new user: Replace metrics by your user name. Add the created user in the user database: To add the created user, you are required to enter the metrics account password. Set the permissions of the user database: Restart the pmcd service: Verification Verify the sasl configuration: Additional resources saslauthd(8) , pminfo(1) , and sha256 man pages on your system How can I setup authentication between PCP components, like PMDAs and pmcd in RHEL 8.2? (Red Hat Knowledgebase) 10.8. Installing PCP bpftrace Install the PCP bpftrace agent to introspect a system and to gather metrics from the kernel and user-space tracepoints. The bpftrace agent uses bpftrace scripts to gather the metrics. The bpftrace scripts use the enhanced Berkeley Packet Filter ( eBPF ). This procedure describes how to install a pcp bpftrace . Prerequisites PCP is configured. For more information, see Setting up PCP with pcp-zeroconf . The grafana-server is configured. For more information, see Setting up a grafana-server . The scram-sha-256 authentication mechanism is configured. For more information, see Setting up authentication between PCP components . 
Procedure Install the pcp-pmda-bpftrace package: Edit the bpftrace.conf file and add the user that you have created in Setting up authentication between PCP components : Replace metrics by your user name. Install bpftrace PMDA: The pmda-bpftrace is now installed, and can only be used after authenticating your user. For more information, see Viewing the PCP bpftrace System Analysis dashboard . Additional resources pmdabpftrace(1) and bpftrace man pages on your system 10.9. Viewing the PCP bpftrace System Analysis dashboard Using the PCP bpftrace data source, you can access the live data from sources which are not available as normal data from the pmlogger or archives In the PCP bpftrace data source, you can view the dashboard with an overview of useful metrics. Prerequisites The PCP bpftrace is installed. For more information, see Installing PCP bpftrace . The grafana-server is accessible. For more information, see Accessing the Grafana web UI . Procedure Log into the Grafana web UI. In the Grafana Home page, click Add your first data source . In the Add data source pane, type bpftrace in the Filter by name or type text box and then click PCP bpftrace . In the Data Sources / PCP bpftrace pane, perform the following: Add http://localhost:44322 in the URL field. Toggle the Basic Auth option and add the created user credentials in the User and Password field. Click Save & Test . Figure 10.5. Adding PCP bpftrace in the data source Click Dashboards tab Import PCP bpftrace: System Analysis to see a dashboard with an overview of any useful metrics. Figure 10.6. PCP bpftrace: System Analysis 10.10. Installing PCP Vector This procedure describes how to install a pcp vector . Prerequisites PCP is configured. For more information, see Setting up PCP with pcp-zeroconf . The grafana-server is configured. For more information, see Setting up a grafana-server . Procedure Install the bcc PMDA: Additional resources pmdabcc(1) man page on your system 10.11. Viewing the PCP Vector Checklist The PCP Vector data source displays live metrics and uses the pcp metrics. It analyzes data for individual hosts. After adding the PCP Vector data source, you can view the dashboard with an overview of useful metrics and view the related troubleshooting or reference links in the checklist. Prerequisites The PCP Vector is installed. For more information, see Installing PCP Vector . The grafana-server is accessible. For more information, see Accessing the Grafana web UI . Procedure Log into the Grafana web UI. In the Grafana Home page, click Add your first data source . In the Add data source pane, type vector in the Filter by name or type text box and then click PCP Vector . In the Data Sources / PCP Vector pane, perform the following: Add http://localhost:44322 in the URL field and then click Save & Test . Click Dashboards tab Import PCP Vector: Host Overview to see a dashboard with an overview of any useful metrics. Figure 10.7. PCP Vector: Host Overview From the menu, hover over the Performance Co-Pilot plugin and then click PCP Vector Checklist . In the PCP checklist, click help or warning icon to view the related troubleshooting or reference links. Figure 10.8. Performance Co-Pilot / PCP Vector Checklist 10.12. Using heatmaps in Grafana You can use heatmaps in Grafana to view histograms of your data over time, identify trends and patterns in your data, and see how they change over time. 
Each column within a heatmap represents a single histogram with different colored cells representing the different densities of observation of a given value within that histogram. Important This specific workflow is for the heatmaps in Grafana version 9.2.10 and later on RHEL8. Prerequisites PCP Redis is configured. For more information see Configuring PCP Redis . The grafana-server is accessible. For more information see Accessing the Grafana Web UI . The PCP Redis data source is configured. For more information see Creating panels and alerts in PCP Redis data source . Procedure Hover the cursor over the Dashboards tab and click + New dashboard . In the Add panel menu, click Add a new panel . In the Query tab: Select PCP Redis from the query list instead of the selected default option. In the text field of A , enter a metric, for example, kernel.all.load to visualize the kernel load graph. Click the visualization dropdown menu, which is set to Time series by default, and then click Heatmap . Optional: In the Panel Options dropdown menu, add a Panel Title and Description . In the Heatmap dropdown menu, under the Calculate from data setting, click Yes . Heatmap Optional: In the Colors dropdown menu, change the Scheme from the default Orange and select the number of steps (color shades). Optional: In the Tooltip dropdown menu, under the Show histogram (Y Axis) setting, click the toggle to display a cell's position within its specific histogram when hovering your cursor over a cell in the heatmap. For example: Show histogram (Y Axis) cell display 10.13. Troubleshooting Grafana issues It is sometimes neccesary to troubleshoot Grafana issues, such as, Grafana does not display any data, the dashboard is black, or similar issues. Procedure Verify that the pmlogger service is up and running by executing the following command: Verify if files were created or modified to the disk by executing the following command: Verify that the pmproxy service is running by executing the following command: Verify that pmproxy is running, time series support is enabled, and a connection to Redis is established by viewing the /var/log/pcp/pmproxy/pmproxy.log file and ensure that it contains the following text: Here, 1716 is the PID of pmproxy, which will be different for every invocation of pmproxy . Verify if the Redis database contains any keys by executing the following command: Verify if any PCP metrics are in the Redis database and pmproxy is able to access them by executing the following commands: Verify if there are any errors in the Grafana logs by executing the following command:
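The verification commands referenced above appear in full in the listing that follows; as a compact sketch, a typical troubleshooting pass on the Grafana host looks like this (the pmproxy log path assumes the default PCP logging location):

systemctl status pmlogger pmproxy
grep 'Redis slots' /var/log/pcp/pmproxy/pmproxy.log
redis-cli dbsize
pmseries disk.dev.read
journalctl -e -u grafana-server

If redis-cli dbsize returns 0 or pmseries prints nothing, the time series data is not reaching Redis, and the pmproxy and redis services are the first things to restart and re-check.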
|
[
"yum install pcp-zeroconf",
"pcp | grep pmlogger pmlogger: primary logger: /var/log/pcp/pmlogger/ localhost.localdomain/20200401.00.12",
"yum install grafana grafana-pcp",
"systemctl restart grafana-server systemctl enable grafana-server",
"firewall-cmd --permanent --add-service=grafana success firewall-cmd --reload success",
"ss -ntlp | grep 3000 LISTEN 0 128 *:3000 *:* users:((\"grafana-server\",pid=19522,fd=7))",
"grafana-cli plugins ls | grep performancecopilot-pcp-app performancecopilot-pcp-app @ 3.1.0",
"yum module install redis:6",
"systemctl start pmproxy redis systemctl enable pmproxy redis",
"systemctl restart grafana-server",
"pmseries disk.dev.read 2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df",
"vi /etc/grafana/grafana.ini [smtp] enabled = true from_address = [email protected]",
"systemctl restart grafana-server.service",
"yum install cyrus-sasl-scram cyrus-sasl-lib",
"vi /etc/sasl2/pmcd.conf mech_list: scram-sha-256 sasldb_path: /etc/pcp/passwd.db",
"useradd -r metrics",
"saslpasswd2 -a pmcd metrics Password: Again (for verification):",
"chown root:pcp /etc/pcp/passwd.db chmod 640 /etc/pcp/passwd.db",
"systemctl restart pmcd",
"pminfo -f -h \"pcp://127.0.0.1?username= metrics \" disk.dev.read Password: disk.dev.read inst [0 or \"sda\"] value 19540",
"yum install pcp-pmda-bpftrace",
"vi /var/lib/pcp/pmdas/bpftrace/bpftrace.conf [dynamic_scripts] enabled = true auth_enabled = true allowed_users = root, metrics",
"cd /var/lib/pcp/pmdas/bpftrace/ ./Install Updating the Performance Metrics Name Space (PMNS) Terminate PMDA if already installed Updating the PMCD control file, and notifying PMCD Check bpftrace metrics have appeared ... 7 metrics and 6 values",
"cd /var/lib/pcp/pmdas/bcc ./Install [Wed Apr 1 00:27:48] pmdabcc(22341) Info: Initializing, currently in 'notready' state. [Wed Apr 1 00:27:48] pmdabcc(22341) Info: Enabled modules: [Wed Apr 1 00:27:48] pmdabcc(22341) Info: ['biolatency', 'sysfork', [...] Updating the Performance Metrics Name Space (PMNS) Terminate PMDA if already installed Updating the PMCD control file, and notifying PMCD Check bcc metrics have appeared ... 1 warnings, 1 metrics and 0 values",
"systemctl status pmlogger",
"ls /var/log/pcp/pmlogger/USD(hostname)/ -rlt total 4024 -rw-r--r--. 1 pcp pcp 45996 Oct 13 2019 20191013.20.07.meta.xz -rw-r--r--. 1 pcp pcp 412 Oct 13 2019 20191013.20.07.index -rw-r--r--. 1 pcp pcp 32188 Oct 13 2019 20191013.20.07.0.xz -rw-r--r--. 1 pcp pcp 44756 Oct 13 2019 20191013.20.30-00.meta.xz [..]",
"systemctl status pmproxy",
"pmproxy(1716) Info: Redis slots, command keys, schema version setup",
"redis-cli dbsize (integer) 34837",
"pmseries disk.dev.read 2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pmseries \"disk.dev.read[count:10]\" 2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df [Mon Jul 26 12:21:10.085468000 2021] 117971 70e83e88d4e1857a3a31605c6d1333755f2dd17c [Mon Jul 26 12:21:00.087401000 2021] 117758 70e83e88d4e1857a3a31605c6d1333755f2dd17c [Mon Jul 26 12:20:50.085738000 2021] 116688 70e83e88d4e1857a3a31605c6d1333755f2dd17c [...]",
"redis-cli --scan --pattern \"*USD(pmseries 'disk.dev.read')\" pcp:metric.name:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pcp:values:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pcp:desc:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pcp:labelvalue:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pcp:instances:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pcp:labelflags:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df",
"journalctl -e -u grafana-server -- Logs begin at Mon 2021-07-26 11:55:10 IST, end at Mon 2021-07-26 12:30:15 IST. -- Jul 26 11:55:17 localhost.localdomain systemd[1]: Starting Grafana instance Jul 26 11:55:17 localhost.localdomain grafana-server[1171]: t=2021-07-26T11:55:17+0530 lvl=info msg=\"Starting Grafana\" logger=server version=7.3.6 c> Jul 26 11:55:17 localhost.localdomain grafana-server[1171]: t=2021-07-26T11:55:17+0530 lvl=info msg=\"Config loaded from\" logger=settings file=/usr/s> Jul 26 11:55:17 localhost.localdomain grafana-server[1171]: t=2021-07-26T11:55:17+0530 lvl=info msg=\"Config loaded from\" logger=settings file=/etc/g> [...]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/setting-up-graphical-representation-of-pcp-metrics_monitoring-and-managing-system-status-and-performance
|
Release notes for Eclipse Temurin 11.0.21
|
Release notes for Eclipse Temurin 11.0.21 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.21/index
|
Chapter 3. Getting started
|
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites To build the example, Maven must be configured to use the Red Hat repository or a local repository . You must install the examples . You must have a message broker listening for connections on localhost . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named queue . For more information, see Creating a queue . 3.2. Running Hello World The Hello World example creates a connection to the broker, sends a message containing a greeting to the queue queue, and receives it back. On success, it prints the received message to the console. Procedure Use Maven to build the examples by running the following command in the <source-dir> /qpid-jms-examples directory: USD mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests The addition of dependency:copy-dependencies results in the dependencies being copied into the target/dependency directory. Use the java command to run the example. On Linux or UNIX: USD java -cp "target/classes:target/dependency/*" org.apache.qpid.jms.example.HelloWorld On Windows: > java -cp "target\classes;target\dependency\*" org.apache.qpid.jms.example.HelloWorld For example, running it on Linux results in the following output: USD java -cp "target/classes/:target/dependency/*" org.apache.qpid.jms.example.HelloWorld Hello world! The source code for the example is in the <source-dir> /qpid-jms-examples/src/main/java directory. The JNDI and logging configuration is in the <source-dir> /qpid-jms-examples/src/main/resources directory.
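The JNDI configuration mentioned above is what maps the names used in the example onto a broker address and a queue. As a rough sketch (not the shipped file verbatim), a jndi.properties for Qpid JMS along these lines would match the prerequisites in this chapter, with the broker on localhost and a queue named queue:

java.naming.factory.initial = org.apache.qpid.jms.jndi.JmsInitialContextFactory
connectionfactory.myFactoryLookup = amqp://localhost:5672
queue.myQueueLookup = queue

The lookup names on the left are arbitrary labels resolved by the example code; the AMQP URI and queue name on the right are what you adjust for your own broker.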
|
[
"mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests",
"java -cp \"target/classes:target/dependency/*\" org.apache.qpid.jms.example.HelloWorld",
"> java -cp \"target\\classes;target\\dependency\\*\" org.apache.qpid.jms.example.HelloWorld",
"java -cp \"target/classes/:target/dependency/*\" org.apache.qpid.jms.example.HelloWorld Hello world!"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_client/getting_started
|
6.5. Using verdict maps in nftables commands
|
6.5. Using verdict maps in nftables commands Verdict maps, which are also known as dictionaries, enable nft to perform an action based on packet information by mapping match criteria to an action. 6.5.1. Using anonymous maps in nftables An anonymous map is a { match_criteria : action } statement that you use directly in a rule. The statement can contain multiple comma-separated mappings. The drawback of an anonymous map is that if you want to change the map, you must replace the rule. For a dynamic solution, use named maps as described in Section 6.5.2, "Using named maps in nftables". The example describes how to use an anonymous map to route both TCP and UDP packets of the IPv4 and IPv6 protocol to different chains to count incoming TCP and UDP packets separately. Procedure 6.15. Using anonymous maps in nftables Create the example_table : Create the tcp_packets chain in example_table : Add a rule to tcp_packets that counts the traffic in this chain: Create the udp_packets chain in example_table : Add a rule to udp_packets that counts the traffic in this chain: Create a chain for incoming traffic. For example, to create a chain named incoming_traffic in example_table that filters incoming traffic: Add a rule with an anonymous map to incoming_traffic : The anonymous map distinguishes the packets and sends them to the different counter chains based on their protocol. To list the traffic counters, display example_table : The counters in the tcp_packets and udp_packets chain display both the number of received packets and bytes. 6.5.2. Using named maps in nftables The nftables framework supports named maps. You can use these maps in multiple rules within a table. Another benefit over anonymous maps is that you can update a named map without replacing the rules that use it. When you create a named map, you must specify the type of elements: ipv4_addr for a map whose match part contains an IPv4 address, such as 192.0.2.1. ipv6_addr for a map whose match part contains an IPv6 address, such as 2001:db8:1::1. ether_addr for a map whose match part contains a media access control (MAC) address, such as 52:54:00:6b:66:42. inet_proto for a map whose match part contains an Internet protocol type, such as tcp. inet_service for a map whose match part contains an Internet service name or port number, such as ssh or 22. mark for a map whose match part contains a packet mark. A packet mark can be any positive 32-bit integer value (0 to 2147483647). counter for a map whose match part contains a counter value. The counter value can be any positive 64-bit integer value. quota for a map whose match part contains a quota value. The quota value can be any positive 64-bit integer value. The example describes how to allow or drop incoming packets based on their source IP address. Using a named map, you require only a single rule to configure this scenario while the IP addresses and actions are dynamically stored in the map. The procedure also describes how to add and remove entries from the map. Procedure 6.16. Using named maps in nftables Create a table. For example, to create a table named example_table that processes IPv4 packets: Create a chain. For example, to create a chain named example_chain in example_table : Important To prevent the shell from interpreting the semicolons as the end of the command, you must escape the semicolons with a backslash. Create an empty map. For example, to create a map for IPv4 addresses: Create rules that use the map.
For example, the following command adds a rule to example_chain in example_table that applies the actions defined in example_map to packets from the matching source IPv4 addresses: Add IPv4 addresses and corresponding actions to example_map : This example defines the mappings of IPv4 addresses to actions. In combination with the rule created above, the firewall accepts packets from 192.0.2.1 and drops packets from 192.0.2.2. Optionally, enhance the map by adding another IP address and action statement: Optionally, remove an entry from the map: Optionally, display the rule set: 6.5.3. Related information For further details about verdict maps, see the Maps section in the nft(8) man page.
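Beyond the IPv4 procedure above, the other element types listed earlier can be used the same way. The following is a minimal sketch, not part of the original procedure, that keys a named verdict map on inet_service values so that TCP traffic to port 22 is accepted and traffic to port 23 is dropped; the table, chain, and map names are placeholders:
nft add table inet example_service_table
nft add chain inet example_service_table example_input { type filter hook input priority 0 \; }
nft add map inet example_service_table example_service_map { type inet_service : verdict \; }
nft add element inet example_service_table example_service_map { 22 : accept, 23 : drop }
nft add rule inet example_service_table example_input tcp dport vmap @example_service_map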
|
[
"nft add table inet example_table",
"nft add chain inet example_table tcp_packets",
"nft add rule inet example_table tcp_packets counter",
"nft add chain inet example_table udp_packets",
"nft add rule inet example_table udp_packets counter",
"nft add chain inet example_table incoming_traffic { type filter hook input priority 0 \\; }",
"nft add rule inet example_table incoming_traffic ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets }",
"nft list table inet example_table table inet example_table { chain tcp_packets { counter packets 36379 bytes 2103816 } chain udp_packets { counter packets 10 bytes 1559 } chain incoming_traffic { type filter hook input priority filter; policy accept; ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets } } }",
"nft add table ip example_table",
"nft add chain ip example_table example_chain { type filter hook input priority 0 \\; }",
"nft add map ip example_table example_map { type ipv4_addr : verdict \\; }",
"nft add rule example_table example_chain ip saddr vmap @example_map",
"nft add element ip example_table example_map { 192.0.2.1 : accept, 192.0.2.2 : drop }",
"nft add element ip example_table example_map { 192.0.2.3 : accept }",
"nft delete element ip example_table example_map { 192.0.2.1 }",
"nft list ruleset table ip example_table { map example_map { type ipv4_addr : verdict elements = { 192.0.2.2 : drop, 192.0.2.3 : accept } } chain example_chain { type filter hook input priority filter; policy accept; ip saddr vmap @example_map } }"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-using_verdict_maps_in_nftables_commands
|
Chapter 2. Configuring the OpenShift Pipelines tkn CLI
|
Chapter 2. Configuring the OpenShift Pipelines tkn CLI Configure the Red Hat OpenShift Pipelines tkn CLI to enable tab completion. 2.1. Enabling tab completion After you install the tkn CLI, you can enable tab completion to automatically complete tkn commands or suggest options when you press Tab. Prerequisites You must have the tkn CLI tool installed. You must have bash-completion installed on your local system. Procedure The following procedure enables tab completion for Bash. Save the Bash completion code to a file: $ tkn completion bash > tkn_bash_completion Copy the file to /etc/bash_completion.d/ : $ sudo cp tkn_bash_completion /etc/bash_completion.d/ Alternatively, you can save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal.
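If you do not have write access to /etc/bash_completion.d/, the per-user alternative mentioned above can look like the following sketch; the file name and location are arbitrary choices, not requirements:
tkn completion bash > ~/.tkn_bash_completion
echo 'source ~/.tkn_bash_completion' >> ~/.bashrc
source ~/.bashrc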
|
[
"tkn completion bash > tkn_bash_completion",
"sudo cp tkn_bash_completion /etc/bash_completion.d/"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/pipelines_cli_tkn_reference/op-configuring-tkn
|
Working with DNS in Identity Management
|
Working with DNS in Identity Management Red Hat Enterprise Linux 9 Managing the IdM-integrated DNS service Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/working_with_dns_in_identity_management/index
|
5.249. postgresql and postgresql84
|
5.249. postgresql and postgresql84 5.249.1. RHSA-2012:1037 - Moderate: postgresql and postgresql84 security update Updated postgresql84 and postgresql packages that fix two security issues are now available for Red Hat Enterprise Linux 5 and 6 respectively. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE links associated with each description below. PostgreSQL is an advanced object-relational database management system (DBMS). Security Fixes CVE-2012-2143 A flaw was found in the way the crypt() password hashing function from the optional PostgreSQL pgcrypto contrib module performed password transformation when used with the DES algorithm. If the password string to be hashed contained the 0x80 byte value, the remainder of the string was ignored when calculating the hash, significantly reducing the password strength. This made brute-force guessing more efficient as the whole password was not required to gain access to protected resources. Note: With this update, the rest of the string is properly included in the DES hash; therefore, any previously stored password values that are affected by this issue will no longer match. In such cases, it will be necessary for those stored password hashes to be updated. CVE-2012-2655 A denial of service flaw was found in the way the PostgreSQL server performed a user privileges check when applying SECURITY DEFINER or SET attributes to a procedural language's (such as PL/Perl or PL/Python) call handler function. A non-superuser database owner could use this flaw to cause the PostgreSQL server to crash due to infinite recursion. Upstream acknowledges Rubin Xu and Joseph Bonneau as the original reporters of the CVE-2012-2143 issue. These updated packages upgrade PostgreSQL to version 8.4.12, which fixes these issues as well as several non-security issues. Refer to the PostgreSQL Release Notes for a full list of changes: http://www.postgresql.org/docs/8.4/static/release.html All PostgreSQL users are advised to upgrade to these updated packages, which correct these issues. If the postgresql service is running, it will be automatically restarted after installing this update. 5.249.2. RHSA-2012:1263 - Moderate: postgresql and postgresql84 security update Updated postgresql84 and postgresql packages that fix two security issues are now available for Red Hat Enterprise Linux 5 and 6 respectively. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. PostgreSQL is an advanced object-relational database management system (DBMS). Security Fixes CVE-2012-3488 It was found that the optional PostgreSQL xml2 contrib module allowed local files and remote URLs to be read and written to with the privileges of the database server when parsing Extensible Stylesheet Language Transformations (XSLT). An unprivileged database user could use this flaw to read and write to local files (such as the database's configuration files) and remote URLs they would otherwise not have access to by issuing a specially-crafted SQL query.
CVE-2012-3489 It was found that the "xml" data type allowed local files and remote URLs to be read with the privileges of the database server to resolve DTD and entity references in the provided XML. An unprivileged database user could use this flaw to read local files they would otherwise not have access to by issuing a specially-crafted SQL query. Note that the full contents of the files were not returned, but portions could be displayed to the user via error messages. Red Hat would like to thank the PostgreSQL project for reporting these issues. Upstream acknowledges Peter Eisentraut as the original reporter of CVE-2012-3488, and Noah Misch as the original reporter of CVE-2012-3489. These updated packages upgrade PostgreSQL to version 8.4.13. Refer to the PostgreSQL Release Notes for a list of changes: All PostgreSQL users are advised to upgrade to these updated packages, which correct these issues. If the postgresql service is running, it will be automatically restarted after installing this update.
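Applying either advisory is a standard package update. As an illustrative sketch only (package names follow the affected Red Hat Enterprise Linux release):
yum update postgresql84    # Red Hat Enterprise Linux 5
yum update postgresql      # Red Hat Enterprise Linux 6
rpm -q postgresql postgresql84    # confirm the updated version; a running postgresql service restarts automatically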
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/postgresql_and_postgresql84
|
Chapter 3. Editing the inventory file
|
Chapter 3. Editing the inventory file Edit the inventory file to specify an installation of automation hub and update the required parameters. Navigate to the installer. [bundled installer] $ cd ansible-automation-platform-setup-bundle-<latest-version> [online installer] $ cd ansible-automation-platform-setup-<latest-version> Open the inventory file with a text editor. Edit the inventory file parameters to specify an installation of automation hub host only. Refer to the following example. Leave [automationcontroller] inventory information empty. Add [automationhub] group host information. Note Provide a reachable IP address for the [automationhub] host to ensure users can sync content from private automation hub from a different node. Update the values for automationhub_admin_password and automationhub_pg_password and any additional parameters based on your installation specifications, as shown in the example inventory below. 3.1. Connecting automation hub to a Red Hat Single Sign-On environment To connect automation hub to a Red Hat Single Sign-On installation, configure inventory variables in the inventory file before you run the installer setup script. You must configure a different set of variables when connecting to a Red Hat Single Sign-On installation managed by Ansible Automation Platform than when connecting to an external Red Hat Single Sign-On installation. 3.1.1. Inventory file variables for connecting automation hub to a Red Hat Single Sign-On instance If you are installing automation hub and Red Hat Single Sign-On together for the first time or you have an existing Ansible Automation Platform managed Red Hat Single Sign-On, configure the variables for Ansible Automation Platform managed Red Hat Single Sign-On. If you are installing automation hub and you intend to connect it to an existing externally managed Red Hat Single Sign-On instance, configure the variables for externally managed Red Hat Single Sign-On. For more information about these inventory variables, refer to Red Hat Single Sign-On variables in the Red Hat Ansible Automation Platform Installation Guide. The following variables can be configured for both Ansible Automation Platform managed and external Red Hat Single Sign-On: Variable Required or optional sso_console_admin_password Required sso_console_admin_username Optional sso_use_https Optional sso_redirect_host Optional sso_ssl_validate_certs Optional sso_automation_platform_realm Optional sso_automation_platform_realm_displayname Optional sso_automation_platform_login_theme Optional The following variables can be configured for Ansible Automation Platform managed Red Hat Single Sign-On only: Variable Required or optional sso_keystore_password Required only if sso_use_https = true sso_custom_keystore_file Optional sso_keystore_file_remote Optional sso_keystore_name Optional The following variable can be configured for external Red Hat Single Sign-On only: Variable Description sso_host Required
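For instance, a minimal, illustrative fragment for connecting to an externally managed Red Hat Single Sign-On instance adds the relevant variables from the tables above to the [all:vars] section of the inventory; the host name and credential values below are placeholders, not defaults taken from this guide:
[all:vars]
sso_host=sso.example.com
sso_console_admin_username=admin
sso_console_admin_password=<sso-admin-password>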
|
[
"cd ansible-automation-platform-setup-bundle-<latest-version>",
"cd ansible-automation-platform-setup-<latest-version>",
"[automationcontroller] [automationhub] <reachable-ip> ansible_connection=local [all:vars] automationhub_admin_password= <PASSWORD> automationhub_pg_host='' automationhub_pg_port='' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' The default install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/installing_and_upgrading_private_automation_hub/editing_the_inventory_file
|
Chapter 1. About Red Hat CodeReady Workspaces
|
Chapter 1. About Red Hat CodeReady Workspaces Red Hat CodeReady Workspaces is a web-based integrated development environment (IDE). CodeReady Workspaces runs in OpenShift and is well-suited for container-based development. CodeReady Workspaces provides: an enterprise-level cloud developer workspace server a browser-based IDE ready-to-use developer stacks for popular programming languages, frameworks, and Red Hat technologies Red Hat CodeReady Workspaces 2.15 is based on Eclipse Che 7.42. 1.1. Supported deployment environments This section describes the availability and the supported installation methods of CodeReady Workspaces 2.15 on OpenShift Container Platform 4.8 to 4.10, 3.11, and OpenShift Dedicated. Table 1.1. Supported deployment environments for CodeReady Workspaces 2.15 on OpenShift Container Platform and OpenShift Dedicated Platform Architecture Deployment method OpenShift Container Platform 3.11 AMD64 and Intel 64 (x86_64) crwctl OpenShift Container Platform 4.8 to 4.10 AMD64 and Intel 64 (x86_64) OperatorHub, crwctl OpenShift Container Platform 4.8 to 4.10 IBM Z (s390x) OperatorHub, crwctl OpenShift Container Platform 4.8 to 4.10 IBM Power (ppc64le) OperatorHub, crwctl OpenShift Dedicated 4.10 AMD64 and Intel 64 (x86_64) Add-on service Red Hat OpenShift Service on AWS (ROSA) AMD64 and Intel 64 (x86_64) Add-on service Additional resources Installing CodeReady Workspaces from Operator Hub on OpenShift 4.10. Installing CodeReady Workspaces on OpenShift Container Platform 3.11. 1.2. Support policy For Red Hat CodeReady Workspaces 2.15, Red Hat will provide support for deployment, configuration, and use of the product. CodeReady Workspaces 2.15 has been tested on Chrome version 94.0.4606.81 (Official Build) (64-bit). Additional resources CodeReady Workspaces life-cycle and support policy. 1.3. Differences between Eclipse Che and Red Hat CodeReady Workspaces The main differences between CodeReady Workspaces and Eclipse Che are: CodeReady Workspaces is built on RHEL8 to ensure the latest security fixes are included, compared to Alpine distributions that take a longer time to update. CodeReady Workspaces uses Red Hat Single Sign-On (RH-SSO) rather than the upstream project Keycloak. CodeReady Workspaces provides a smaller supported subset of plug-ins compared to Che. CodeReady Workspaces provides devfiles for working with other Red Hat technologies such as EAP and Fuse. CodeReady Workspaces is supported on OpenShift Container Platform and OpenShift Dedicated; Eclipse Che can run on other Kubernetes clusters. Red Hat provides licensing, packaging, and support. Therefore, CodeReady Workspaces is considered a more stable product than the upstream Eclipse Che project.
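As an illustrative sketch of the crwctl deployment method listed in the table above, a deployment to OpenShift Container Platform can be started from the CLI; the platform flag shown here is an assumption and may vary by crwctl version, so check crwctl server:deploy --help before running:
crwctl server:deploy -p openshift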
| null |
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/release_notes_and_known_issues/about-codeready-workspaces_crw
|
Chapter 71. security
|
Chapter 71. security This chapter describes the commands under the security command. 71.1. security group create Create a new security group Usage: Table 71.1. Positional arguments Value Summary <name> New security group name Table 71.2. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Security group description --project <project> Owner's project (name or id) --stateful Security group is stateful (default) --stateless Security group is stateless --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --tag <tag> Tag to be added to the security group (repeat option to set multiple tags) --no-tag No tags associated with the security group Table 71.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 71.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.2. security group delete Delete security group(s) Usage: Table 71.7. Positional arguments Value Summary <group> Security group(s) to delete (name or id) Table 71.8. Command arguments Value Summary -h, --help Show this help message and exit 71.3. security group list List security groups Usage: Table 71.9. Command arguments Value Summary -h, --help Show this help message and exit --project <project> List security groups according to the project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --tags <tag>[,<tag>,... ] List security group which have all given tag(s) (Comma-separated list of tags) --any-tags <tag>[,<tag>,... ] List security group which have any given tag(s) (Comma-separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude security group which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude security group which have any given tag(s) (Comma-separated list of tags) Table 71.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 71.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 71.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.13. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.4. security group rule create Create a new security group rule Usage: Table 71.14. Positional arguments Value Summary <group> Create rule in this security group (name or id) Table 71.15. Command arguments Value Summary -h, --help Show this help message and exit --remote-ip <ip-address> Remote ip address block (may use cidr notation; default for IPv4 rule: 0.0.0.0/0, default for IPv6 rule: ::/0) --remote-group <group> Remote security group (name or id) --remote-address-group <group> Remote address group (name or id) --dst-port <port-range> Destination port, may be a single port or a starting and ending port range: 137:139. Required for IP protocols TCP and UDP. Ignored for ICMP IP protocols. --protocol <protocol> Ip protocol (ah, dccp, egp, esp, gre, icmp, igmp, ipv6-encap, ipv6-frag, ipv6-icmp, ipv6-nonxt, ipv6-opts, ipv6-route, ospf, pgm, rsvp, sctp, tcp, udp, udplite, vrrp and integer representations [0-255] or any; default: any (all protocols)) --description <description> Set security group rule description --icmp-type <icmp-type> Icmp type for icmp ip protocols --icmp-code <icmp-code> Icmp code for icmp ip protocols --ingress Rule applies to incoming network traffic (default) --egress Rule applies to outgoing network traffic --ethertype <ethertype> Ethertype of network traffic (ipv4, ipv6; default: based on IP protocol) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 71.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 71.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.5. security group rule delete Delete security group rule(s) Usage: Table 71.20. Positional arguments Value Summary <rule> Security group rule(s) to delete (id only) Table 71.21. Command arguments Value Summary -h, --help Show this help message and exit 71.6. security group rule list List security group rules Usage: Table 71.22. Positional arguments Value Summary <group> List all rules in this security group (name or id) Table 71.23. 
Command arguments Value Summary -h, --help Show this help message and exit --protocol <protocol> List rules by the ip protocol (ah, dhcp, egp, esp, gre, icmp, igmp, ipv6-encap, ipv6-frag, ipv6-icmp, ipv6-nonxt, ipv6-opts, ipv6-route, ospf, pgm, rsvp, sctp, tcp, udp, udplite, vrrp and integer representations [0-255] or any; default: any (all protocols)) --ethertype <ethertype> List rules by the ethertype (ipv4 or ipv6) --ingress List rules applied to incoming network traffic --egress List rules applied to outgoing network traffic --long deprecated this argument is no longer needed Table 71.24. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 71.25. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 71.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.7. security group rule show Display security group rule details Usage: Table 71.28. Positional arguments Value Summary <rule> Security group rule to display (id only) Table 71.29. Command arguments Value Summary -h, --help Show this help message and exit Table 71.30. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 71.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.32. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.33. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.8. security group set Set security group properties Usage: Table 71.34. Positional arguments Value Summary <group> Security group to modify (name or id) Table 71.35. Command arguments Value Summary -h, --help Show this help message and exit --name <new-name> New security group name --description <description> New security group description --stateful Security group is stateful (default) --stateless Security group is stateless --tag <tag> Tag to be added to the security group (repeat option to set multiple tags) --no-tag Clear tags associated with the security group. 
specify both --tag and --no-tag to overwrite current tags 71.9. security group show Display security group details Usage: Table 71.36. Positional arguments Value Summary <group> Security group to display (name or id) Table 71.37. Command arguments Value Summary -h, --help Show this help message and exit Table 71.38. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 71.39. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 71.40. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 71.41. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 71.10. security group unset Unset security group properties Usage: Table 71.42. Positional arguments Value Summary <group> Security group to modify (name or id) Table 71.43. Command arguments Value Summary -h, --help Show this help message and exit --tag <tag> Tag to be removed from the security group (repeat option to remove multiple tags) --all-tag Clear all tags associated with the security group
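As a combined, illustrative walk-through of the subcommands above, using only options documented in this chapter (the group name and CIDR value are placeholders):
openstack security group create --description "Allow inbound SSH" example-ssh
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 203.0.113.0/24 --ingress example-ssh
openstack security group rule list example-ssh
openstack security group show example-ssh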
|
[
"openstack security group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--project <project>] [--stateful | --stateless] [--project-domain <project-domain>] [--tag <tag> | --no-tag] <name>",
"openstack security group delete [-h] <group> [<group> ...]",
"openstack security group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]",
"openstack security group rule create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--remote-ip <ip-address> | --remote-group <group> | --remote-address-group <group>] [--dst-port <port-range>] [--protocol <protocol>] [--description <description>] [--icmp-type <icmp-type>] [--icmp-code <icmp-code>] [--ingress | --egress] [--ethertype <ethertype>] [--project <project>] [--project-domain <project-domain>] <group>",
"openstack security group rule delete [-h] <rule> [<rule> ...]",
"openstack security group rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--protocol <protocol>] [--ethertype <ethertype>] [--ingress | --egress] [--long] [<group>]",
"openstack security group rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <rule>",
"openstack security group set [-h] [--name <new-name>] [--description <description>] [--stateful | --stateless] [--tag <tag>] [--no-tag] <group>",
"openstack security group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <group>",
"openstack security group unset [-h] [--tag <tag> | --all-tag] <group>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/security
|
Chapter 6. Red Hat Directory Server 12.3
|
Chapter 6. Red Hat Directory Server 12.3 Learn about new system requirements, important updates and new features, known issues, and deprecated functionality implemented in Directory Server 12.3. 6.1. Important updates and new features Learn about new features and important updates in Red Hat Directory Server 12.3. Directory Server now backs up configuration files, the certificate database, and custom schema files Previously, Directory Server backed up only databases. With this update, when you run dsconf backup create or dsctl db2bak command, Directory Server also backs up configuration files, the certificate database, and custom schema files that are stored in the /etc/dirsrv/slapd- instance_name / directory to the backup default directory /var/lib/dirsrv/slapd- instance_name /bak/config_files/ . Directory Server also backs up these files when you perform the backup by using the web console. (BZ#2147446) The Alias Entries plug-in is now available in Directory Server When you enable the Alias Entries plug-in, a search for an entry returns the entry that you set as an aliased entry. For example, Barbara Jensen, an employee in the Example company, got married and her surname changed. Her old entry uid=bjensen,ou=people,dc=example,dc=com contains the alias to her new entry uid=bsmith,ou=people,dc=example,dc=com . When the plug-in is enabled, the search for the uid=bjensen,ou=people,dc=example,dc=com entry returns the uid=bsmith,ou=people,dc=example,dc=com entry information. Use the -a find parameter for the ldapsearch command to retrieve entries with aliases. Currently, the Alias Entries plug-in supports only base level searches. For more information, see the Alias Entries plug-in description. (BZ#2203173) The checkAllStateAttrs configuration option is now available You can apply both account inactivity and password expiration when a user authenticates by using the checkAllStateAttrs setting. When you enable this parameter, it checks the main state attribute and, if the account information is correct, it then checks the alternate state attribute. (BZ#2174161) You can now save credentials and aliases for a replication report using the Directory Server web console Previously, when you used the web console to set credentials and aliases for a replication monitoring report, these settings were no longer present after the web console reload. With this enhancement, when you set the credentials and aliases for the replication report, Directory Server saves new settings in the .dsrc file and the web console uploads saved settings after the reload. (BZ#2030884) Important updates and new features in the 389-ds-base package Directory Server 12.3 features that are included in the 389-ds-base package are documented in Red Hat Enterprise Linux 9.3 Release Notes: RHEL 9.3 provides 389-ds-base 2.3.4 Directory Server can now close a client connection if a bind operation fails Automembership plug-in improvements. It no longer cleans up groups by default New passwordAdminSkipInfoUpdate : on/off configuration option is now available New slapi_memberof() plug-in function is now available for Directory Server plug-ins and client applications Directory Server now replaces the virtual attribute nsRole with an indexed attribute for managed and filtered roles New nsslapd-numlisteners configuration option is now available 6.2. Bug fixes Learn about bugs fixed in Red Hat Directory Server 12.3 that have a significant impact on users. 
The cockpit-389-ds package upgrade now updates the 389-ds-base and python3-lib389 packages Previously, the cockpit-389-ds package did not specify the version of the 389-ds-base package it depends on. As a result, the upgrade of the cockpit-389-ds package alone did not update the 389-ds-base and python3-lib389 packages which could lead to misalignment and compatibility issues between packages. With this update, the cockpit-389-ds package depends on the 389-ds-base exact version and the update of the cockpit-389-ds package also upgrades 389-ds-base and python3-lib389 packages. (BZ#2240021) Disabling replication on a consumer no longer crashes the server Previously, when you disabled replication on a consumer server, Directory Server tried to remove the changelog on the consumer where it did not exist. As a consequence, the server terminated unexpectedly with the following error: With this update, disabling replication on a consumer works as expected. (BZ#2184599) A non-root instance no longer fails to start after creation Previously, Rust plug-ins were incorrectly disabled in the non-root instance template and the default password scheme was moved to Rust-based hasher. As a result, the non-root instance could not be created. With this update, a non-root instance supports Rust plug-ins and you can create the instance with the PBKDF2-SHA512 default password scheme. (BZ#2151864) The dsconf utility now accepts only value 65535 as the replica-id when setting a hub or a consumer role Previously, when you configured a hub or a consumer role, the dsconf utility also accepted the replica-id option with a value other than 65535 . With this update, the dsconf utility accepts only 65535 as the replica-id value for a hub or a consumer role. If you do not specify this value in a dsconf command, then Directory Server assigns the replica-id value 65535 automatically. (BZ#1987373) The dscreate ds-root command now normalizes paths Previously, when you created an instance under a non-root user and provided a bin_dir argument value that contained a trailing slash, dscreate ds-root failed to find the bin_dir value in the USDPATH variable. As a result, the instance under a non-root user was not created. With this update, dscreate ds-root command normalizes paths, and the instance is created as expected. (BZ#2151868) The dsconf utility now has the fixup option to create fix-up tasks for the entryUUID plug-in Previously, the dsconf utility did not provide an option to create fix-up tasks for the entryUUID plug-in. As a consequence, administrators could not use dsconf to create a task to automatically add entryUUID attributes to existing entries. With this update, you can use the dsconf utility with the fixup option to create fix-up tasks for the entryUUID plug-in. For example, to fix all entries under the dn=example,dc=com entry that contain a uid attribute, enter: (BZ#2047175) Access log no longer displays an error message during Directory Server installation in FIPS mode Previously, when you installed Directory Server in FIPS mode, the access log file displayed the following error message: With this update, the issue has been fixed, and the error message is no longer present in the access log. 
(BZ#2153668) Directory Server 12.3 bug fixes that are included in the 389-ds-base package are documented in Red Hat Enterprise Linux 9.3 Release Notes: Paged searches from a regular user now do not impact performance The LMDB import now works faster Schema replication now works correctly in Directory Server Referral mode is now working correctly in Directory Server The dirsrv service now starts correctly after reboot Changing a security parameter now works correctly 6.3. Known issues Learn about known problems and, if applicable, workarounds in Directory Server 12.3. Directory Server can import LDIF files only from /var/lib/dirsrv/slapd- instance_name /ldif/ Since RHEL 8.3, Red Hat Directory Server (RHDS) uses its own private directories and the PrivateTmp systemd directive is enabled by default for the LDAP services. As a result, RHDS can only import LDIF files from the /var/lib/dirsrv/slapd- instance_name /ldif/ directory. If the LDIF file is stored in a different directory, such as /var/tmp , /tmp , or /root , the import fails with an error similar to the following: To work around this problem, complete the following steps: Move the LDIF file to the /var/lib/dirsrv/slapd- instance_name /ldif/ directory: Set permissions that allow the dirsrv user to read the file: Restore the SELinux context: For more information, see the solution article LDAP Service cannot access files under the host's /tmp and /var/tmp directories . (BZ#2075525) Known issues in the 389-ds-base package Red Hat Directory Server 12.3 known issues that affect 389-ds-base package are documented in Red Hat Enterprise Linux 9.3 Release Notes: When the nsslapd-numlisteners attribute value is more than 2, Directory Server fails 6.4. Deprecated functionality Learn about functionality that has been deprecated in Red Hat Directory Server 12.3. Deprecated functionality in the 389-ds-base package Directory Server 12.3 functionality that has been deprecated in the 389-ds-base package is documented in the Red Hat Enterprise Linux 9.3 Release Notes: The nsslapd-ldapimaprootdn parameter is deprecated 6.5. Removed functionality Learn about functionality that has been removed in Red Hat Directory Server 12.3. Removed functionality in the 389-ds-base package Removed functionality in Red Hat Directory Server, that are included in the 389-ds-base package, are documented in the Red Hat Enterprise Linux 9.3 Release Notes: The nsslapd-conntablesize configuration parameter has been removed from 389-ds-base
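To make the backup and alias behavior described above concrete, the following sketch assumes an instance reachable at ldap://server.example.com and standard Directory Manager credentials; adjust names and paths to your deployment:
# Back up databases together with configuration files, the certificate database, and custom schema files
dsconf -D "cn=Directory Manager" ldap://server.example.com backup create
ls /var/lib/dirsrv/slapd-instance_name/bak/
# Base-level search that dereferences an alias entry (-a find), as described for the Alias Entries plug-in
ldapsearch -H ldap://server.example.com -D "cn=Directory Manager" -W -a find -s base -b "uid=bjensen,ou=people,dc=example,dc=com" "(objectClass=*)"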
|
[
"Error: -1 - Can't contact LDAP server - []",
"dsconf instance_name plugin entryuuid fixup -f \"(uid=*)\" \" dn=example,dc=com \"",
"[time_stamp] - WARN - slapd_do_all_nss_ssl_init - ERROR: TLS is not enabled, and the machine is in FIPS mode. Some functionality won't work correctly (for example, users with PBKDF2_SHA256 password scheme won't be able to log in). It's highly advisable to enable TLS on this instance.",
"Could not open LDIF file \"/tmp/example.ldif\", errno 2 (No such file or directory)",
"mv /tmp/ example.ldif /var/lib/dirsrv/slapd- instance_name__/ldif/",
"chown dirsrv /var/lib/dirsrv/slapd- instance_name /ldif/ example.ldif",
"restorecon -Rv /var/lib/dirsrv/slapd- instance_name /ldif/"
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/red_hat_directory_server_12_release_notes/assembly_red-hat-directory-server-12-3_rhds-12-rn
|
Chapter 1. Support overview
|
Chapter 1. Support overview Red Hat offers cluster administrators tools for gathering data for your cluster, monitoring, and troubleshooting. 1.1. Get support Get support : Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources. 1.2. Remote health monitoring issues Remote health monitoring issues : OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in connected clusters. Similar to connected clusters, you can Use remote health monitoring in a restricted network. OpenShift Container Platform collects data and monitors health using the following: Telemetry : The Telemetry Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to: Monitor the clusters. Roll out OpenShift Container Platform upgrades. Improve the upgrade experience. Insights Operator : By default, OpenShift Container Platform installs and enables the Insights Operator, which reports configuration and component failure status every two hours. The Insights Operator helps to: Identify potential cluster issues proactively. Provide a solution and preventive action in Red Hat OpenShift Cluster Manager. You can Review telemetry information. If you have enabled remote health reporting, Use Insights to identify issues. You can optionally disable remote health reporting. 1.3. Gather data about your cluster Gather data about your cluster : Red Hat recommends gathering your debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster: The must-gather tool : Use the must-gather tool to collect information about your cluster and to debug the issues. sosreport : Use the sosreport tool to collect configuration details, system information, and diagnostic data for debugging purposes. Cluster ID : Obtain the unique identifier for your cluster, when providing information to Red Hat Support. Bootstrap node journal logs : Gather bootkube.service journald unit logs and container logs from the bootstrap node to troubleshoot bootstrap-related issues. Cluster node journal logs : Gather journald unit logs and logs within /var/log on individual cluster nodes to troubleshoot node-related issues. A network trace : Provide a network packet trace from a specific OpenShift Container Platform cluster node or a container to Red Hat Support to help troubleshoot network-related issues. Diagnostic data : Use the redhat-support-tool command to gather diagnostic data about your cluster. 1.4. Troubleshooting issues A cluster administrator can monitor and troubleshoot the following OpenShift Container Platform component issues: Installation issues : OpenShift Container Platform installation proceeds through various stages. You can perform the following: Monitor the installation stages. Determine at which stage installation issues occur. Investigate multiple installation issues. Gather logs from a failed installation. Node issues : A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following: Kubelet's status on a node. Cluster node journal logs.
Crio issues : A cluster administrator can verify CRI-O container runtime engine status on each cluster node. If you experience container runtime issues, perform the following: Gather CRI-O journald unit logs. Cleaning CRI-O storage. Operating system issues : OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS. If you experience operating system issues, you can investigate kernel crash procedures. Ensure the following: Enable kdump. Test the kdump configuration. Analyze a core dump. Network issues : To troubleshoot Open vSwitch issues, a cluster administrator can perform the following: Configure the Open vSwitch log level temporarily. Configure the Open vSwitch log level permanently. Display Open vSwitch logs. Operator issues : A cluster administrator can do the following to resolve Operator issues: Verify Operator subscription status. Check Operator pod health. Gather Operator logs. Pod issues : A cluster administrator can troubleshoot pod-related issues by reviewing the status of a pod and completing the following: Review pod and container logs. Start debug pods with root access. Source-to-image issues : A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues: Source-to-Image diagnostic data. Application diagnostic data to investigate application failure. Storage issues : A multi-attach storage error occurs when the mounting volume on a new node is not possible because the failed node cannot unmount the attached volume. A cluster administrator can do the following to resolve multi-attach storage issues: Enable multiple attachments by using RWX volumes. Recover or delete the failed node when using an RWO volume. Monitoring issues : A cluster administrator can follow the procedures on the troubleshooting page for monitoring. If the metrics for your user-defined projects are unavailable or if Prometheus is consuming a lot of disk space, check the following: Investigate why user-defined metrics are unavailable. Determine why Prometheus is consuming a lot of disk space. Logging issues : A cluster administrator can follow the procedures on the troubleshooting page for OpenShift Logging issues. Check the following to resolve logging issues: Status of the Logging Operator . Status of the Log store . OpenShift Logging alerts . Information about your OpenShift logging environment using oc adm must-gather command . OpenShift CLI (oc) issues : Investigate OpenShift CLI (oc) issues by increasing the log level.
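As a brief, illustrative sequence of the data-gathering commands referenced above (run as a cluster administrator with the OpenShift CLI):
oc adm must-gather    # collect cluster debugging data for a support case
oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'    # obtain the cluster ID
oc adm node-logs --role=master -u kubelet    # review journald unit logs from control plane nodes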
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/support/support-overview
|
7.216. rpcbind
|
7.216. rpcbind 7.216.1. RHBA-2013:0291 - rpcbind bug fix and enhancement update Updated rpcbind packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The rpcbind utility maps RPC (Remote Procedure Call) services to the ports on which the services listen and allows the host to make RPC calls to the RPC server. Bug Fixes BZ# 813898 Previously, the rpcbind(8) man page referred to rpcbind(3) for which no entry existed. This update adds the missing rpcbind(3) man page. BZ#864056 Using Reverse Address Resolution Protocol (RARP) and the bootparams file for booting Solaris or SPARC machines did not work properly. The SPARC systems sent broadcast bootparams WHOAMI requests, but the answer was never sent back by rpcbind. This bug has been fixed and rpcbind no longer discards the bootparams WHOAMI requests in the described scenario. Enhancement BZ# 731542 When using the rpcbind's insecure mode via the "-i" option, non-root local users are now allowed to set and unset calls from remote hosts. All users of rpcbind are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/rpcbind
|
Chapter 8. Authentication APIs
|
Chapter 8. Authentication APIs 8.1. Authentication APIs 8.1.1. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object 8.1.2. TokenReview [authentication.k8s.io/v1] Description TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver. Type object 8.2. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object Required spec 8.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TokenRequestSpec contains client provided parameters of a token request. status object TokenRequestStatus is the result of a token request. 8.2.1.1. .spec Description TokenRequestSpec contains client provided parameters of a token request. Type object Required audiences Property Type Description audiences array (string) Audiences are the intendend audiences of the token. A recipient of a token must identify themself with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences. boundObjectRef object BoundObjectReference is a reference to an object that a token is bound to. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the request. The token issuer may return a token with a different validity duration so a client needs to check the 'expiration' field in a response. 8.2.1.2. .spec.boundObjectRef Description BoundObjectReference is a reference to an object that a token is bound to. Type object Property Type Description apiVersion string API version of the referent. kind string Kind of the referent. Valid kinds are 'Pod' and 'Secret'. name string Name of the referent. uid string UID of the referent. 8.2.1.3. .status Description TokenRequestStatus is the result of a token request. Type object Required token expirationTimestamp Property Type Description expirationTimestamp Time ExpirationTimestamp is the time of expiration of the returned token. token string Token is the opaque bearer token. 8.2.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token POST : create token of a ServiceAccount 8.2.2.1. /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token Table 8.1. Global path parameters Parameter Type Description name string name of the TokenRequest namespace string object name and auth scope, such as for teams and projects Table 8.2. 
Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create token of a ServiceAccount Table 8.3. Body parameters Parameter Type Description body TokenRequest schema Table 8.4. HTTP responses HTTP code Reponse body 200 - OK TokenRequest schema 201 - Created TokenRequest schema 202 - Accepted TokenRequest schema 401 - Unauthorized Empty 8.3. TokenReview [authentication.k8s.io/v1] Description TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver. Type object Required spec 8.3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TokenReviewSpec is a description of the token authentication request. status object TokenReviewStatus is the result of the token authentication request. 8.3.1.1. .spec Description TokenReviewSpec is a description of the token authentication request. Type object Property Type Description audiences array (string) Audiences is a list of the identifiers that the resource server presented with the token identifies as. Audience-aware token authenticators will verify that the token was intended for at least one of the audiences in this list. 
If no audiences are provided, the audience will default to the audience of the Kubernetes apiserver. token string Token is the opaque bearer token. 8.3.1.2. .status Description TokenReviewStatus is the result of the token authentication request. Type object Property Type Description audiences array (string) Audiences are audience identifiers chosen by the authenticator that are compatible with both the TokenReview and token. An identifier is any identifier in the intersection of the TokenReviewSpec audiences and the token's audiences. A client of the TokenReview API that sets the spec.audiences field should validate that a compatible audience identifier is returned in the status.audiences field to ensure that the TokenReview server is audience aware. If a TokenReview returns an empty status.audience field where status.authenticated is "true", the token is valid against the audience of the Kubernetes API server. authenticated boolean Authenticated indicates that the token was associated with a known user. error string Error indicates that the token couldn't be checked user object UserInfo holds the information about the user needed to implement the user.Info interface. 8.3.1.3. .status.user Description UserInfo holds the information about the user needed to implement the user.Info interface. Type object Property Type Description extra object Any additional information provided by the authenticator. extra{} array (string) groups array (string) The names of groups this user is a part of. uid string A unique value that identifies this user across time. If this user is deleted and another user by the same name is added, they will have different UIDs. username string The name that uniquely identifies this user among all active users. 8.3.1.4. .status.user.extra Description Any additional information provided by the authenticator. Type object 8.3.2. API endpoints The following API endpoints are available: /apis/authentication.k8s.io/v1/tokenreviews POST : create a TokenReview 8.3.2.1. /apis/authentication.k8s.io/v1/tokenreviews Table 8.5. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a TokenReview Table 8.6. Body parameters Parameter Type Description body TokenReview schema Table 8.7. HTTP responses HTTP code Response body 200 - OK TokenReview schema 201 - Created TokenReview schema 202 - Accepted TokenReview schema 401 - Unauthorized Empty
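The TokenReview endpoint above can be exercised with any HTTP client. The following is a minimal, illustrative sketch using the JDK's java.net.http client; the API server URL, the reviewer's bearer token, and the token under test are placeholder values, and the sketch assumes the JVM trust store already trusts the API server certificate (in practice you may need to supply an SSLContext loaded with the cluster CA).

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenReviewExample {
    public static void main(String[] args) throws Exception {
        // Placeholder values: substitute your API server URL, a bearer token that is
        // authorized to create TokenReview objects, and the token to be checked.
        String apiServer = "https://api.example.com:6443";
        String reviewerToken = System.getenv("REVIEWER_TOKEN");
        String tokenToCheck = System.getenv("TOKEN_TO_CHECK");

        // Minimal TokenReview body; spec.token carries the opaque bearer token to authenticate.
        String body = "{\"apiVersion\":\"authentication.k8s.io/v1\",\"kind\":\"TokenReview\","
                + "\"spec\":{\"token\":\"" + tokenToCheck + "\"}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/apis/authentication.k8s.io/v1/tokenreviews"))
                .header("Authorization", "Bearer " + reviewerToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The server echoes the TokenReview back with status.authenticated, status.user,
        // and status.audiences populated.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}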
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/authentication-apis-1
|
function::json_add_string_metric
|
function::json_add_string_metric Name function::json_add_string_metric - Add a string metric Synopsis Arguments name The name of the string metric. description Metric description. An empty string can be used. Description This function adds a string metric, setting up everything needed.
|
[
"json_add_string_metric:long(name:string,description:string)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-json-add-string-metric
|
Chapter 8. Configuring IdM clients in an Active Directory DNS domain
|
Chapter 8. Configuring IdM clients in an Active Directory DNS domain If you have client systems in a DNS domain controlled by Active Directory (AD) and you require those clients to join the IdM Server to benefit from its RHEL features, you can configure users to access a client using a host name from the AD DNS domain. Important This configuration is not recommended and has limitations. Always deploy IdM clients in a DNS zone separate from the ones owned by AD and access IdM clients using their IdM host names. Your IdM client configuration depends on whether you require single sign-on with Kerberos. 8.1. Configuring an IdM client without Kerberos single sign-on Password authentication is the only authentication method that is available for users to access resources on IdM clients if the IdM clients are in an Active Directory DNS domain. Follow this procedure to configure your client without Kerberos single sign-on. Procedure Install the IdM client with the --domain=IPA_DNS_Domain option to ensure the System Security Services Daemon (SSSD) can communicate with the IdM servers: This option disables the SRV record auto-detection for the Active Directory DNS domain. Open the /etc/krb5.conf configuration file and locate the existing mapping for the Active Directory domain in the [domain_realm] section. Replace both lines with an entry mapping the fully qualified domain name (FQDN) of the Linux clients in the Active Directory DNS zone to the IdM realm: By replacing the default mapping, you prevent Kerberos from sending its requests for the Active Directory domain to the IdM Kerberos Distribution Center (KDC). Instead Kerberos uses auto-discovery through SRV DNS records to locate the KDC. 8.2. Requesting SSL certificates without single sign-on After you configure an IdM client without Kerberos single sign-on, you can set up SSL-based services. SSL-based services require a certificate with dNSName extension records that cover all system host names, because both original (A/AAAA) and CNAME records must be in the certificate. Currently, IdM only issues certificates to host objects in the IdM database. In this setup, where single sign-on is not enabled, IdM already contains a host object for the FQDN in its database. You can use certmonger to request a certificate using the FQDN. Prerequisites An IdM client configured without Kerberos single-sign on. Procedure Use certmonger to request a certificate using the FQDN: The certmonger service uses the default host key stored in the /etc/krb5.keytab file to authenticate to the IdM Certificate Authority (CA). 8.3. Configuring an IdM client with Kerberos single sign-on If you require Kerberos single sign-on to access resources on the IdM client, the client must be within the IdM DNS domain, for example idm-client.idm.example.com . You must create a CNAME record idm-client.ad.example.com in the Active Directory DNS domain pointing to the A/AAAA record of the IdM client. For Kerberos-based application servers, MIT Kerberos supports a method to allow the acceptance of any host-based principal available in the application's keytab. Procedure On the IdM client, disable the strict checks on what Kerberos principal is used to target the Kerberos server by setting the following option in the [libdefaults] section of the /etc/krb5.conf configuration file: 8.4. Requesting SSL certificates with single sign-on After you disabled strict Kerberos principal checks on your IdM client, you can set up SSL-based services. 
SSL-based services require a certificate with dNSName extension records that cover all system host names, because both original (A/AAAA) and CNAME records must be in the certificate. Currently, IdM only issues certificates to host objects in the IdM database. Follow this procedure to create a host object for ipa-client.example.com in IdM and make sure the real IdM machine's host object is able to manage this host. Prerequisites You have disabled the strict checks on what Kerberos principal is used to target the Kerberos server. Procedure Create a new host object on the IdM server: Use the --force option, because the host name is a CNAME and not an A/AAAA record. On the IdM server, allow the IdM DNS host name to manage the Active Directory host entry in the IdM database: You can now request an SSL certificate for your IdM client with the dNSName extension record for its host name within the Active Directory DNS domain:
|
[
"ipa-client-install --domain=idm.example.com",
".ad.example.com = IDM.EXAMPLE.COM ad.example.com = IDM.EXAMPLE.COM",
"idm-client.ad.example.com = IDM.EXAMPLE.COM",
"ipa-getcert request -r -f /etc/httpd/alias/server.crt -k /etc/httpd/alias/server.key -N CN=ipa-client.ad.example.com -D ipa-client.ad.example.com -K host/[email protected] -U id-kp-serverAuth",
"ignore_acceptor_hostname = true",
"ipa host-add idm-client.ad.example.com --force",
"ipa host-add-managedby idm-client.ad.example.com --hosts=idm-client.idm.example.com",
"ipa-getcert request -r -f /etc/httpd/alias/server.crt -k /etc/httpd/alias/server.key -N CN=`hostname --fqdn` -D `hostname --fqdn` -D idm-client.ad.example.com -K host/[email protected] -U id-kp-serverAuth"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_trust_between_idm_and_ad/assembly_configuring-idm-clients-in-an-active-directory-dns-domain_installing-trust-between-idm-and-ad
|
9.7. Data Security for Remote Client Server Mode
|
9.7. Data Security for Remote Client Server Mode 9.7.1. About Security Realms A security realm is a series of mappings between users and passwords, and users and roles. Security realms are a mechanism for adding authentication and authorization to your EJB and Web applications. Red Hat JBoss Data Grid Server provides two security realms by default: ManagementRealm stores authentication information for the Management API, which provides the functionality for the Management CLI and web-based Management Console. It provides an authentication system for managing JBoss Data Grid Server itself. You could also use the ManagementRealm if your application needed to authenticate with the same business rules you use for the Management API. ApplicationRealm stores user, password, and role information for Web Applications and EJBs. Each realm is stored in two files on the filesystem: REALM -users.properties stores usernames and hashed passwords. REALM -roles.properties stores user-to-role mappings. mgmt-groups.properties stores the user-to-role mapping file for ManagementRealm . The properties files are stored in the standalone/configuration/ directories. The files are written simultaneously by the add-user.sh or add-user.bat command. When you run the command, the first decision you make is which realm to add your new user to. 9.7.2. Add a New Security Realm Run the Management CLI. Start the cli.sh or cli.bat command and connect to the server. Create the new security realm itself. Run the following command to create a new security realm named MyDomainRealm on a domain controller or a standalone server. Create the references to the properties file which will store information about the new role. Run the following command to create a pointer to a file named myfile.properties , which will contain the properties pertaining to the new role. Note The newly-created properties file is not managed by the included add-user.sh and add-user.bat scripts. It must be managed externally. Result Your new security realm is created. When you add users and roles to this new realm, the information will be stored in a separate file from the default security realms. You can manage this new file using your own applications or procedures. 9.7.3. Add a User to a Security Realm Run the add-user.sh or add-user.bat command. Open a terminal and change directories to the JDG_HOME /bin/ directory. If you run Red Hat Enterprise Linux or another UNIX-like operating system, run add-user.sh . If you run Microsoft Windows Server, run add-user.bat . Choose whether to add a Management User or Application User. For this procedure, type b to add an Application User. Choose the realm the user will be added to. By default, the only available realm is ApplicationRealm . If you have added a custom realm, you can type its name instead. Type the username, password, and roles, when prompted. Type the desired username, password, and optional roles when prompted. Verify your choice by typing yes , or type no to cancel the changes. The changes are written to each of the properties files for the security realm. 9.7.4. Configuring Security Realms Declaratively In Remote Client-Server mode, a Hot Rod endpoint must specify a security realm.
The security realm declares an authentication and an authorization section. Example 9.10. Configuring Security Realms Declaratively The server-identities parameter can also be used to specify certificates. 9.7.5. Loading Roles from LDAP for Authorization (Remote Client-Server Mode) An LDAP directory contains entries for user accounts and groups, cross referenced by attributes. Depending on the LDAP server configuration, a user entity may map the groups the user belongs to through memberOf attributes; a group entity may map which users belong to it through uniqueMember attributes; or both mappings may be maintained by the LDAP server. Users generally authenticate against the server using a simple user name. When searching for group membership information, depending on the directory server in use, searches could be performed using this simple name or using the distinguished name of the user's entry in the directory. The authentication step of a user connecting to the server always happens first. Once the user is successfully authenticated the server loads the user's groups. The authentication step and the authorization step each require a connection to the LDAP server. The realm optimizes this process by reusing the authentication connection for the group loading step. As will be shown within the configuration steps below it is possible to define rules within the authorization section to convert a user's simple user name to their distinguished name. The result of a "user name to distinguished name mapping" search during authentication is cached and reused during the authorization query when the force attribute is set to "false". When force is true, the search is performed again during authorization (while loading groups). This is typically done when different servers perform authentication and authorization. Important These examples specify some attributes with their default values. This is done for demonstration. Attributes that specify their default values are removed from the configuration when it is persisted by the server. The exception is the force attribute. It is required, even when set to the default value of false . username-to-dn The username-to-dn element specifies how to map the user name to the distinguished name of their entry in the LDAP directory. This element is only required when both of the following are true: The authentication and authorization steps are against different LDAP servers. The group search uses the distinguished name. 1:1 username-to-dn This specifies that the user name entered by the remote user is the user's distinguished name. This defines a 1:1 mapping and there is no additional configuration. username-filter The option is very similar to the simple option described above for the authentication step. A specified attribute is searched for a match against the supplied user name. The attributes that can be set here are: base-dn : The distinguished name of the context to begin the search. recursive : Whether the search will extend to sub contexts. Defaults to false . attribute : The attribute of the user's entry to try and match against the supplied user name. Defaults to uid . user-dn-attribute : The attribute to read to obtain the user's distinguished name. Defaults to dn . advanced-filter The final option is to specify an advanced filter; as in the authentication section, this is an opportunity to use a custom filter to locate the user's distinguished name.
For the attributes that match those in the username-filter example, the meaning and default values are the same. There is one new attribute: filter : Custom filter used to search for a user's entry where the user name will be substituted in the {0} place holder. Important The XML must remain valid after the filter is defined, so if any special characters such as & are used, ensure the proper escaped form is used. For example, write &amp; for the & character. The Group Search There are two different styles that can be used when searching for group membership information. The first style is where the user's entry contains an attribute that references the groups the user is a member of. The second style is where the group contains an attribute referencing the user's entry. When there is a choice of which style to use Red Hat recommends that the configuration for a user's entry referencing the group is used. This is because with this method group information can be loaded by reading attributes of known distinguished names without having to perform any searches. The other approach requires extensive searches to identify the groups that reference the user. Before describing the configuration here are some LDIF examples to illustrate this. Example 9.11. Principal to Group - LDIF example. This example illustrates where we have a user TestUserOne who is a member of GroupOne , GroupOne is in turn a member of GroupFive . The group membership is shown by the use of a memberOf attribute which is set to the distinguished name of the group of which the user (or group) is a member. It is not shown here but a user could potentially have multiple memberOf attributes set, one for each group of which the user is directly a member. Example 9.12. Group to Principal - LDIF Example This example shows the same user TestUserOne who is a member of GroupOne which is in turn a member of GroupFive - however in this case it is an attribute uniqueMember from the group to the user being used for the cross reference. Again the attribute used for the group membership cross reference can be repeated, if you look at GroupFive there is also a reference to another user TestUserFive which is not shown here. General Group Searching Before looking at the examples for the two approaches shown above we first need to define the attributes common to both of these. group-name : This attribute is used to specify the form that should be used for the group name returned as the list of groups of which the user is a member. This can either be the simple form of the group name or the group's distinguished name. If the distinguished name is required this attribute can be set to DISTINGUISHED_NAME . Defaults to SIMPLE . iterative : This attribute is used to indicate if, after identifying the groups a user is a member of, we should also iteratively search based on the groups to identify which groups the groups are a member of. If iterative searching is enabled we keep going until either we reach a group that is not a member of any other groups or a cycle is detected. Defaults to false . Cyclic group membership is not a problem. A record of each search is kept to prevent groups that have already been searched from being searched again. Important For iterative searching to work the group entries need to look the same as user entries. The same approach used to identify the groups a user is a member of is then used to identify the groups of which the group is a member.
This would not be possible if for group to group membership the name of the attribute used for the cross reference changes or if the direction of the reference changes. group-dn-attribute : On an entry for a group, which attribute is its distinguished name. Defaults to dn . group-name-attribute : On an entry for a group, which attribute is its simple name. Defaults to uid . Example 9.13. Principal to Group Example Configuration Based on the example LDIF from above here is an example configuration iteratively loading a user's groups where the attribute used to cross reference is the memberOf attribute on the user. The most important aspect of this configuration is that the principal-to-group element has been added with a single attribute. group-attribute : The name of the attribute on the user entry that matches the distinguished name of the group the user is a member of. Defaults to memberOf . Example 9.14. Group to Principal Example Configuration This example shows an iterative search for the group to principal LDIF example shown above. Here an element group-to-principal is added. This element is used to define how searches for groups that reference the user entry will be performed. The following attributes are set: base-dn : The distinguished name of the context to use to begin the search. recursive : Whether sub-contexts are also searched. Defaults to false . search-by : The form of the role name used in searches. Valid values are SIMPLE and DISTINGUISHED_NAME . Defaults to DISTINGUISHED_NAME . Within the group-to-principal element there is a membership-filter element to define the cross reference. principal-attribute : The name of the attribute on the group entry that references the user entry. Defaults to member . 9.7.6. Hot Rod Interface Security 9.7.6.1. Publish Hot Rod Endpoints as a Public Interface Red Hat JBoss Data Grid's Hot Rod server operates as a management interface by default. To extend its operations to a public interface, alter the value of the interface parameter in the socket-binding element from management to public as follows: 9.7.6.2. Encryption of communication between Hot Rod Server and Hot Rod client Hot Rod can be encrypted using TLS/SSL, and has the option to require certificate-based client authentication. Use the following procedure to secure the Hot Rod connector using SSL. Procedure 9.3. Secure Hot Rod Using SSL/TLS Generate a Keystore Create a Java Keystore using the keytool application distributed with the JDK and add your certificate to it. The certificate can be either self signed, or obtained from a trusted CA depending on your security policy. Place the Keystore in the Configuration Directory Put the keystore in the ~/JDG_HOME/standalone/configuration directory with the standalone-hotrod-ssl.xml file from the ~/JDG_HOME/docs/examples/configs directory. Declare an SSL Server Identity Declare an SSL server identity within a security realm in the management section of the configuration file. The SSL server identity must specify the path to a keystore and its secret key. See Section 9.7.7.4, "Configure Hot Rod Authentication (X.509)" for details about these parameters. Add the Security Element Add the security element to the Hot Rod connector as follows: Server Authentication of Certificate If you require the server to perform authentication of the client certificate, create a truststore that contains the valid client certificates and set the require-ssl-client-auth attribute to true .
Start the Server Start the server using the following: This will start a server with a Hot Rod endpoint on port 11222. This endpoint will only accept SSL connections. Securing Hot Rod using SSL can also be configured programmatically. Example 9.15. Secure Hot Rod Using SSL/TLS Important To prevent plain text passwords from appearing in configurations or source code, plain text passwords should be changed to Vault passwords. For more information about how to set up Vault passwords, see the Red Hat Enterprise Application Platform Security Guide . 9.7.6.3. Securing Hot Rod to LDAP Server using SSL When connecting to an LDAP server with SSL enabled it may be necessary to specify a trust store or key store containing the appropriate certificates. Section 9.7.6.2, "Encryption of communication between Hot Rod Server and Hot Rod client" describes how to set up SSL for Hot Rod client-server communication. This can be used, for example, for secure Hot Rod client authentication with PLAIN username/password. When the username/password is checked against credentials in LDAP, a secure connection from the Hot Rod server to the LDAP server is also required. To enable connection from the Hot Rod server to LDAP via SSL, a security realm must be defined as follows: Example 9.16. Hot Rod Client Authentication to LDAP Server Example 9.17. Hot Rod Client Authentication to LDAP Server Important To prevent plain text passwords from appearing in configurations or source code, plain text passwords should be changed to Vault passwords. For more information about how to set up Vault passwords, see the Red Hat Enterprise Application Platform Security Guide . 9.7.7. User Authentication over Hot Rod Using SASL User authentication over Hot Rod can be implemented using the following Simple Authentication and Security Layer (SASL) mechanisms: PLAIN is the least secure mechanism because credentials are transported in plain text format. However, it is also the simplest mechanism to implement. This mechanism can be used in conjunction with encryption ( SSL ) for additional security. DIGEST-MD5 is a mechanism that hashes the credentials before transporting them. As a result, it is more secure than the PLAIN mechanism. GSSAPI is a mechanism that uses Kerberos tickets. As a result, it requires a correctly configured Kerberos Domain Controller (for example, Microsoft Active Directory). EXTERNAL is a mechanism that obtains the required credentials from the underlying transport (for example, from an X.509 client certificate) and therefore requires client certificate encryption to work correctly. 9.7.7.1. Configure Hot Rod Authentication (GSSAPI/Kerberos) Use the following steps to set up Hot Rod Authentication using the SASL GSSAPI/Kerberos mechanism: Procedure 9.4. Configure SASL GSSAPI/Kerberos Authentication Server-side Configuration The following steps must be configured on the server-side: Define a Kerberos security login module using the security domain subsystem: Ensure that the cache-container has authorization roles defined, and these roles are applied in the cache's authorization block as seen in Section 9.5, "Configuring Red Hat JBoss Data Grid for Authorization" . Configure a Hot Rod connector as follows: The server-name attribute specifies the name that the server declares to incoming clients. The client configuration must also contain the same server name value.
The server-context-name attribute specifies the name of the login context used to retrieve a server subject for certain SASL mechanisms (for example, GSSAPI). The mechanisms attribute specifies the authentication mechanism in use. See Section 9.7.7, "User Authentication over Hot Rod Using SASL" for a list of supported mechanisms. The qop attribute specifies the SASL quality of protection value for the configuration. Supported values for this attribute are auth (authentication), auth-int (authentication and integrity, meaning that messages are verified against checksums to detect tampering), and auth-conf (authentication, integrity, and confidentiality, meaning that messages are also encrypted). Multiple values can be specified, for example, auth-int auth-conf . The ordering implies preference, so the first value which matches both the client and server's preference is chosen. The strength attribute specifies the SASL cipher strength. Valid values are low , medium , and high . The no-anonymous element within the policy element specifies whether mechanisms that accept anonymous login are permitted. Set this value to false to permit and true to deny. Client-side Configuration The following steps must be configured on the client-side: Define a login module in a login configuration file ( gss.conf ) on the client side: Set up the following system properties: Note The krb5.conf file is dependent on the environment and must point to the Kerberos Key Distribution Center. Configure the Hot Rod Client: 9.7.7.2. Configure Hot Rod Authentication (MD5) Use the following steps to set up Hot Rod Authentication using the SASL MD5 mechanism: Procedure 9.5. Configure Hot Rod Authentication (MD5) Set up the Hot Rod Connector configuration by adding the sasl element to the authentication element (for details on the authentication element, see Section 9.7.4, "Configuring Security Realms Declaratively" ) as follows: The server-name attribute specifies the name that the server declares to incoming clients. The client configuration must also contain the same server name value. The mechanisms attribute specifies the authentication mechanism in use. See Section 9.7.7, "User Authentication over Hot Rod Using SASL" for a list of supported mechanisms. The qop attribute specifies the SASL quality of protection value for the configuration. Supported values for this attribute are auth , auth-int , and auth-conf . Connect the client to the configured Hot Rod connector as follows: 9.7.7.3. Configure Hot Rod Using LDAP/Active Directory Use the following to configure authentication over Hot Rod using LDAP or Microsoft Active Directory: The following are some details about the elements and parameters used in this configuration: The security-realm element's name parameter specifies the security realm to reference to use when establishing the connection. The authentication element contains the authentication details. The ldap element specifies how LDAP searches are used to authenticate a user. First, a connection to LDAP is established and a search is conducted using the supplied user name to identify the distinguished name of the user. A subsequent connection to the server is established using the password supplied by the user. If the second connection succeeds, the authentication is a success. The connection parameter specifies the name of the connection to use to connect to LDAP. The (optional) recursive parameter specifies whether the filter is executed recursively.
The default value for this parameter is false . The base-dn parameter specifies the distinguished name of the context to use to begin the search from. The (optional) user-dn parameter specifies which attribute to read for the user's distinguished name after the user is located. The default value for this parameter is dn . The outbound-connections element specifies the name of the connection used to connect to the LDAP directory. The ldap element specifies the properties of the outgoing LDAP connection. The name parameter specifies the unique name used to reference this connection. The url parameter specifies the URL used to establish the LDAP connection. The search-dn parameter specifies the distinguished name of the user to authenticate and to perform the searches. The search-credential parameter specifies the password required to connect to LDAP as the search-dn . The (optional) initial-context-factory parameter allows the overriding of the initial context factory. The default value of this parameter is com.sun.jndi.ldap.LdapCtxFactory . 9.7.7.4. Configure Hot Rod Authentication (X.509) The X.509 certificate can be installed at the node, and be made available to other nodes for authentication purposes for inbound and outbound SSL connections. This is enabled using the <server-identities/> element of a security realm definition, which defines how a server appears to external applications. This element can be used to configure a password to be used when establishing a remote connection, as well as the loading of an X.509 key. The following example shows how to install an X.509 certificate on the node. In the provided example, the SSL element contains the <keystore/> element, which is used to define how to load the key from the file-based keystore. The following parameters are available for this element. Table 9.4. <server-identities/> Options Parameter Mandatory/Optional Description path Mandatory This is the path to the keystore, this can be an absolute path or relative to the relative-to attribute. relative-to Optional The name of a service representing a path the keystore is relative to. keystore-password Mandatory The password required to open the keystore. alias Optional The alias of the entry to use from the keystore - for a keystore with multiple entries in practice the first usable entry is used but this should not be relied on and the alias should be set to guarantee which entry is used. key-password Optional The password to load the key entry, if omitted the keystore-password will be used instead. Note If the following error occurs, specify a key-password as well as an alias to ensure only one key is loaded.
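Putting the client-side pieces of this section together, the following sketch configures a Hot Rod client for DIGEST-MD5 authentication over an encrypted connection. It reuses the ConfigurationBuilder security().authentication() API and the MyCallbackHandler class shown in the examples for this chapter; the security().ssl() settings, the truststore file name, and the password are assumptions for illustration and must match the server identity declared in the security realm.

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class SecuredHotRodClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222);

        // Assumed SSL settings: point the truststore at the certificate of the
        // server identity configured in the security realm.
        builder.security().ssl()
                .enable()
                .trustStoreFileName("truststore_client.jks")
                .trustStorePassword("secret".toCharArray());

        // SASL DIGEST-MD5 authentication, as in the connector example for this section;
        // serverName must match the connector's server-name attribute.
        builder.security().authentication()
                .enable()
                .serverName("myhotrodserver")
                .saslMechanism("DIGEST-MD5")
                .callbackHandler(new MyCallbackHandler("myuser", "ApplicationRealm",
                        "qwer1234!".toCharArray()));

        RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
        RemoteCache<String, String> cache = cacheManager.getCache("secured");
        cache.put("greeting", "hello");
    }
}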
|
[
"/host=master/core-service=management/security-realm=MyDomainRealm:add()",
"/host=master/core-service=management/security-realm=MyDomainRealm/authentication=properties:add(path=myfile.properties)",
"<security-realms> <security-realm name=\"ManagementRealm\"> <authentication> <local default-user=\"USDlocal\" skip-group-loading=\"true\"/> <properties path=\"mgmt-users.properties\" relative-to=\"jboss.server.config.dir\"/> </authentication> <authorization map-groups-to-roles=\"false\"> <properties path=\"mgmt-groups.properties\" relative-to=\"jboss.server.config.dir\"/> </authorization> </security-realm> <security-realm name=\"ApplicationRealm\"> <authentication> <local default-user=\"USDlocal\" allowed-users=\"*\" skip-group-loading=\"true\"/> <properties path=\"application-users.properties\" relative-to=\"jboss.server.config.dir\"/> </authentication> <authorization> <properties path=\"application-roles.properties\" relative-to=\"jboss.server.config.dir\"/> </authorization> </security-realm> </security-realms>",
"<authorization> <ldap connection=\"...\"> <!-- OPTIONAL --> <username-to-dn force=\"true\"> <!-- Only one of the following. --> <username-is-dn /> <username-filter base-dn=\"...\" recursive=\"...\" user-dn-attribute=\"...\" attribute=\"...\" /> <advanced-filter base-dn=\"...\" recursive=\"...\" user-dn-attribute=\"...\" filter=\"...\" /> </username-to-dn> <group-search group-name=\"...\" iterative=\"...\" group-dn-attribute=\"...\" group-name-attribute=\"...\" > <!-- One of the following --> <group-to-principal base-dn=\"...\" recursive=\"...\" search-by=\"...\"> <membership-filter principal-attribute=\"...\" /> </group-to-principal> <principal-to-group group-attribute=\"...\" /> </group-search> </ldap> </authorization>",
"<username-to-dn force=\"false\"> <username-is-dn /> </username-to-dn>",
"<username-to-dn force=\"true\"> <username-filter base-dn=\"dc=people,dc=harold,dc=example,dc=com\" recursive=\"false\" attribute=\"sn\" user-dn-attribute=\"dn\" /> </username-to-dn>",
"<username-to-dn force=\"true\"> <advanced-filter base-dn=\"dc=people,dc=harold,dc=example,dc=com\" recursive=\"false\" filter=\"sAMAccountName={0}\" user-dn-attribute=\"dn\" /> </username-to-dn>",
"dn: uid=TestUserOne,ou=users,dc=principal-to-group,dc=example,dc=org objectClass: extensibleObject objectClass: top objectClass: groupMember objectClass: inetOrgPerson objectClass: uidObject objectClass: person objectClass: organizationalPerson cn: Test User One sn: Test User One uid: TestUserOne distinguishedName: uid=TestUserOne,ou=users,dc=principal-to-group,dc=example,dc=org memberOf: uid=GroupOne,ou=groups,dc=principal-to-group,dc=example,dc=org memberOf: uid=Slashy/Group,ou=groups,dc=principal-to-group,dc=example,dc=org userPassword:: e1NTSEF9WFpURzhLVjc4WVZBQUJNbEI3Ym96UVAva0RTNlFNWUpLOTdTMUE9PQ== dn: uid=GroupOne,ou=groups,dc=principal-to-group,dc=example,dc=org objectClass: extensibleObject objectClass: top objectClass: groupMember objectClass: group objectClass: uidObject uid: GroupOne distinguishedName: uid=GroupOne,ou=groups,dc=principal-to-group,dc=example,dc=org memberOf: uid=GroupFive,ou=subgroups,ou=groups,dc=principal-to-group,dc=example,dc=org dn: uid=GroupFive,ou=subgroups,ou=groups,dc=principal-to-group,dc=example,dc=org objectClass: extensibleObject objectClass: top objectClass: groupMember objectClass: group objectClass: uidObject uid: GroupFive distinguishedName: uid=GroupFive,ou=subgroups,ou=groups,dc=principal-to-group,dc=example,dc=org",
"dn: uid=TestUserOne,ou=users,dc=group-to-principal,dc=example,dc=org objectClass: top objectClass: inetOrgPerson objectClass: uidObject objectClass: person objectClass: organizationalPerson cn: Test User One sn: Test User One uid: TestUserOne userPassword:: e1NTSEF9SjR0OTRDR1ltaHc1VVZQOEJvbXhUYjl1dkFVd1lQTmRLSEdzaWc9PQ== dn: uid=GroupOne,ou=groups,dc=group-to-principal,dc=example,dc=org objectClass: top objectClass: groupOfUniqueNames objectClass: uidObject cn: Group One uid: GroupOne uniqueMember: uid=TestUserOne,ou=users,dc=group-to-principal,dc=example,dc=org dn: uid=GroupFive,ou=subgroups,ou=groups,dc=group-to-principal,dc=example,dc=org objectClass: top objectClass: groupOfUniqueNames objectClass: uidObject cn: Group Five uid: GroupFive uniqueMember: uid=TestUserFive,ou=users,dc=group-to-principal,dc=example,dc=org uniqueMember: uid=GroupOne,ou=groups,dc=group-to-principal,dc=example,dc=org",
"<group-search group-name=\"...\" iterative=\"...\" group-dn-attribute=\"...\" group-name-attribute=\"...\" > </group-search>",
"<authorization> <ldap connection=\"LocalLdap\"> <username-to-dn> <username-filter base-dn=\"ou=users,dc=principal-to-group,dc=example,dc=org\" recursive=\"false\" attribute=\"uid\" user-dn-attribute=\"dn\" /> </username-to-dn> <group-search group-name=\"SIMPLE\" iterative=\"true\" group-dn-attribute=\"dn\" group-name-attribute=\"uid\"> <principal-to-group group-attribute=\"memberOf\" /> </group-search> </ldap> </authorization>",
"<authorization> <ldap connection=\"LocalLdap\"> <username-to-dn> <username-filter base-dn=\"ou=users,dc=group-to-principal,dc=example,dc=org\" recursive=\"false\" attribute=\"uid\" user-dn-attribute=\"dn\" /> </username-to-dn> <group-search group-name=\"SIMPLE\" iterative=\"true\" group-dn-attribute=\"dn\" group-name-attribute=\"uid\"> <group-to-principal base-dn=\"ou=groups,dc=group-to-principal,dc=example,dc=org\" recursive=\"true\" search-by=\"DISTINGUISHED_NAME\"> <membership-filter principal-attribute=\"uniqueMember\" /> </group-to-principal> </group-search> </ldap> </authorization>",
"<socket-binding name=\"hotrod\" interface=\"public\" port=\"11222\" />",
"<server-identities> <ssl protocol=\"...\"> <keystore path=\"...\" relative-to=\"...\" keystore-password=\"USD{VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::ENCRYPTED_VALUE}\" /> </ssl> <secret value=\"...\" /> </server-identities>",
"<hotrod-connector socket-binding=\"hotrod\" cache-container=\"local\"> <encryption ssl=\"true\" security-realm=\"ApplicationRealm\" require-ssl-client-auth=\"false\" /> </hotrod-connector>",
"bin/standalone.sh -c standalone-hotrod-ssl.xml",
"package org.infinispan.client.hotrod.configuration; import java.util.Arrays; import javax.net.ssl.KeyManager; import javax.net.ssl.SSLContext; import javax.net.ssl.TrustManager; public class SslConfiguration { private final boolean enabled; private final String keyStoreFileName; private final char[] VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::keyStorePassword; private final SSLContext sslContext; private final String trustStoreFileName; private final char[] VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::trustStorePassword; SslConfiguration(boolean enabled, String keyStoreFileName, char[] keyStorePassword, SSLContext sslContext, String trustStoreFileName, char[] trustStorePassword) { this.enabled = enabled; this.keyStoreFileName = keyStoreFileName; this.keyStorePassword = VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::keyStorePassword; this.sslContext = sslContext; this.trustStoreFileName = trustStoreFileName; this.trustStorePassword = VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::trustStorePassword; } public boolean enabled() { return enabled; } public String keyStoreFileName() { return keyStoreFileName; } public char[] keyStorePassword() { return keyStorePassword; } public SSLContext sslContext() { return sslContext; } public String trustStoreFileName() { return trustStoreFileName; } public char[] trustStorePassword() { return trustStorePassword; } @Override public String toString() { return \"SslConfiguration [enabled=\" + enabled + \", keyStoreFileName=\" + keyStoreFileName + \", sslContext=\" + sslContext + \", trustStoreFileName=\" + trustStoreFileName + \"]\"; } }",
"<management> <security-realms> <security-realm name=\"LdapSSLRealm\"> <authentication> <truststore path=\"ldap.truststore\" relative-to=\"jboss.server.config.dir\" keystore-password=USD{VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::ENCRYPTED_VALUE} /> </authentication> </security-realm> </security-realms> <outbound-connections> <ldap name=\"LocalLdap\" url=\"ldaps://localhost:10389\" search-dn=\"uid=wildfly,dc=simple,dc=wildfly,dc=org\" search-credential=\"secret\" security-realm=\"LdapSSLRealm\" /> </outbound-connections> </management>",
"public class HotRodPlainAuthLdapOverSslIT extends HotRodSaslAuthTestBase { private static final String KEYSTORE_NAME = \"keystore_client.jks\"; private static final String KEYSTORE_PASSWORD = \"VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::ENCRYPTED_VALUE\"; private static ApacheDsLdap ldap; @InfinispanResource(\"hotrodAuthLdapOverSsl\") private RemoteInfinispanServer server; @BeforeClass public static void kerberosSetup() throws Exception { ldap = new ApacheDSLdapSSL(\"localhost\", ITestUtils.SERVER_CONFIG_DIR + File.separator + KEYSTORE_NAME, KEYSTORE_PASSWORD); ldap.start(); } @AfterClass public static void ldapTearDown() throws Exception { ldap.stop(); } @Override public String getTestedMech() { return \"PLAIN\"; } @Override public RemoteInfinispanServer getRemoteServer() { return server; } @Override public void initAsAdmin() { initializeOverSsl(ADMIN_LOGIN, ADMIN_PASSWD); } @Override public void initAsReader() { initializeOverSsl(READER_LOGIN, READER_PASSWD); } @Override public void initAsWriter() { initializeOverSsl(WRITER_LOGIN, WRITER_PASSWD); } @Override public void initAsSupervisor() { initializeOverSsl(SUPERVISOR_LOGIN, SUPERVISOR_PASSWD); } }",
"<system-properties> <property name=\"java.security.krb5.conf\" value=\"/tmp/infinispan/krb5.conf\"/> <property name=\"java.security.krb5.debug\" value=\"true\"/> <property name=\"jboss.security.disable.secdomain.option\" value=\"true\"/> </system-properties> <security-domain name=\"infinispan-server\" cache-type=\"default\"> <authentication> <login-module code=\"Kerberos\" flag=\"required\"> <module-option name=\"debug\" value=\"true\"/> <module-option name=\"storeKey\" value=\"true\"/> <module-option name=\"refreshKrb5Config\" value=\"true\"/> <module-option name=\"useKeyTab\" value=\"true\"/> <module-option name=\"doNotPrompt\" value=\"true\"/> <module-option name=\"keyTab\" value=\"/tmp/infinispan/infinispan.keytab\"/> <module-option name=\"principal\" value=\"HOTROD/[email protected]\"/> </login-module> </authentication> </security-domain>",
"<hotrod-connector socket-binding=\"hotrod\" cache-container=\"default\"> <authentication security-realm=\"ApplicationRealm\"> <sasl server-name=\"node0\" mechanisms=\"{mechanism_name}\" qop=\"{qop_name}\" strength=\"{value}\"> <policy> <no-anonymous value=\"true\" /> </policy> <property name=\"com.sun.security.sasl.digest.utf8\">true</property> </sasl> </authentication> </hotrod-connector>",
"GssExample { com.sun.security.auth.module.Krb5LoginModule required client=TRUE; };",
"java.security.auth.login.config=gss.conf java.security.krb5.conf=/etc/krb5.conf",
"public class MyCallbackHandler implements CallbackHandler { final private String username; final private char[] password; final private String realm; public MyCallbackHandler() { } public MyCallbackHandler (String username, String realm, char[] password) { this.username = username; this.password = password; this.realm = realm; } @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(username); } else if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password); } else if (callback instanceof AuthorizeCallback) { AuthorizeCallback authorizeCallback = (AuthorizeCallback) callback; authorizeCallback.setAuthorized(authorizeCallback.getAuthenticationID().equals( authorizeCallback.getAuthorizationID())); } else if (callback instanceof RealmCallback) { RealmCallback realmCallback = (RealmCallback) callback; realmCallback.setText(realm); } else { throw new UnsupportedCallbackException(callback); } } }} LoginContext lc = new LoginContext(\"GssExample\", new MyCallbackHandler(\"krb_user\", \"krb_password\".toCharArra()));lc.login();Subject clientSubject = lc.getSubject(); ConfigurationBuilder clientBuilder = new ConfigurationBuilder();clientBuilder .addServer() .host(\"127.0.0.1\") .port(11222) .socketTimeout(1200000) .security() .authentication() .enable() .serverName(\"infinispan-server\") .saslMechanism(\"GSSAPI\") .clientSubject(clientSubject) .callbackHandler(new MyCallbackHandler());remoteCacheManager = new RemoteCacheManager(clientBuilder.build());RemoteCache<String, String> cache = remoteCacheManager.getCache(\"secured\");",
"<hotrod-connector socket-binding=\"hotrod\" cache-container=\"default\"> <authentication security-realm=\"ApplicationRealm\"> <sasl server-name=\"myhotrodserver\" mechanisms=\"DIGEST-MD5\" qop=\"auth\" /> </authentication> </hotrod-connector>",
"public class MyCallbackHandler implements CallbackHandler { final private String username; final private char[] password; final private String realm; public MyCallbackHandler (String username, String realm, char[] password) { this.username = username; this.password = password; this.realm = realm; } @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(username); } else if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password); } else if (callback instanceof AuthorizeCallback) { AuthorizeCallback authorizeCallback = (AuthorizeCallback) callback; authorizeCallback.setAuthorized(authorizeCallback.getAuthenticationID().equals( authorizeCallback.getAuthorizationID())); } else if (callback instanceof RealmCallback) { RealmCallback realmCallback = (RealmCallback) callback; realmCallback.setText(realm); } else { throw new UnsupportedCallbackException(callback); } } }} ConfigurationBuilder clientBuilder = new ConfigurationBuilder();clientBuilder .addServer() .host(\"127.0.0.1\") .port(11222) .socketTimeout(1200000) .security() .authentication() .enable() .serverName(\"myhotrodserver\") .saslMechanism(\"DIGEST-MD5\") .callbackHandler(new MyCallbackHandler(\"myuser\", \"ApplicationRealm\", \"qwer1234!\".toCharArray()));remoteCacheManager = new RemoteCacheManager(clientBuilder.build());RemoteCache<String, String> cache = remoteCacheManager.getCache(\"secured\");",
"<security-realms> <security-realm name=\"ApplicationRealm\"> <authentication> <ldap connection=\"ldap_connection\" recursive=\"true\" base-dn=\"cn=users,dc=infinispan,dc=org\"> <username-filter attribute=\"cn\" /> </ldap> </authentication> </security-realm> </security-realms> <outbound-connections> <ldap name=\"ldap_connection\" url=\"ldap://my_ldap_server\" search-dn=\"CN=test,CN=Users,DC=infinispan,DC=org\" search-credential=\"Test_password\"/> </outbound-connections>",
"<security-realm name=\"ApplicationRealm\"> <server-identities> <ssl protocol=\"...\"> <keystore path=\"...\" relative-to=\"...\" keystore-password=\"...\" alias=\"...\" key-password=\"...\" /> </ssl> </server-identities> [... authentication/authorization ...] </security-realms>",
"UnrecoverableKeyException: Cannot recover key"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-data_security_for_remote_client_server_mode
|
Chapter 36. Using Wild Card Types
|
Chapter 36. Using Wild Card Types Abstract There are instances when a schema author wants to defer binding elements or attributes to a defined type. For these cases, XML Schema provides three mechanisms for specifying wild card place holders. These are all mapped to Java in ways that preserve their XML Schema functionality. 36.1. Using Any Elements Overview The XML Schema any element is used to create a wild card place holder in complex type definitions. When an XML element is instantiated for an XML Schema any element, it can be any valid XML element. The any element does not place any restrictions on either the content or the name of the instantiated XML element. For example, given the complex type defined in Example 36.1, "XML Schema Type Defined with an Any Element" you can instantiate either of the XML elements shown in Example 36.2, "XML Document with an Any Element" . Example 36.1. XML Schema Type Defined with an Any Element Example 36.2. XML Document with an Any Element XML Schema any elements are mapped to either a Java Object object or a Java org.w3c.dom.Element object. Specifying in XML Schema The any element can be used when defining sequence complex types and choice complex types. In most cases, the any element is an empty element. It can, however, take an annotation element as a child. Table 36.1, "Attributes of the XML Schema Any Element" describes the any element's attributes. Table 36.1. Attributes of the XML Schema Any Element Attribute Description namespace Specifies the namespace of the elements that can be used to instantiate the element in an XML document. The valid values are: ##any Specifies that elements from any namespace can be used. This is the default. ##other Specifies that elements from any namespace other than the parent element's namespace can be used. ##local Specifies that elements without a namespace must be used. ##targetNamespace Specifies that elements from the parent element's namespace must be used. A space delimited list of URIs, ##local, and ##targetNamespace Specifies that elements from any of the listed namespaces can be used. maxOccurs Specifies the maximum number of times an instance of the element can appear in the parent element. The default value is 1 . To specify that an instance of the element can appear an unlimited number of times, you can set the attribute's value to unbounded . minOccurs Specifies the minimum number of times an instance of the element can appear in the parent element. The default value is 1 . processContents Specifies how the element used to instantiate the any element should be validated. Valid values are: strict Specifies that the element must be validated against the proper schema. This is the default value. lax Specifies that the element should be validated against the proper schema. If it cannot be validated, no errors are thrown. skip Specifies that the element should not be validated. Example 36.3, "Complex Type Defined with an Any Element" shows a complex type defined with an any element Example 36.3. Complex Type Defined with an Any Element Mapping to Java XML Schema any elements result in the creation of a Java property named any . The property has associated getter and setter methods. The type of the resulting property depends on the value of the element's processContents attribute. If the any element's processContents attribute is set to skip , the element is mapped to an org.w3c.dom.Element object. For all other values of the processContents attribute an any element is mapped to a Java Object object.
The generated property is decorated with the @XmlAnyElement annotation. This annotation has an optional lax property that instructs the runtime what to do when marshaling the data. Its default value is false which instructs the runtime to automatically marshal the data into a org.w3c.dom.Element object. Setting lax to true instructs the runtime to attempt to marshal the data into JAXB types. When the any element's processContents attribute is set to skip , the lax property is set to its default value. For all other values of the processContents attribute, lax is set to true . Example 36.4, "Java Class with an Any Element" shows how the complex type defined in Example 36.3, "Complex Type Defined with an Any Element" is mapped to a Java class. Example 36.4. Java Class with an Any Element Marshalling If the Java property for an any element has its lax set to false , or the property is not specified, the runtime makes no attempt to parse the XML data into JAXB objects. The data is always stored in a DOM Element object. If the Java property for an any element has its lax set to true , the runtime attempts to marshal the XML data into the appropriate JAXB objects. The runtime attempts to identify the proper JAXB classes using the following procedure: It checks the element tag of the XML element against the list of elements known to the runtime. If it finds a match, the runtime marshals the XML data into the proper JAXB class for the element. It checks the XML element's xsi:type attribute. If it finds a match, the runtime marshals the XML element into the proper JAXB class for that type. If it cannot find a match it marshals the XML data into a DOM Element object. Usually an application's runtime knows about all of the types generated from the schema's included in its contract. This includes the types defined in the contract's wsdl:types element, any data types added to the contract through inclusion, and any types added to the contract through importing other schemas. You can also make the runtime aware of additional types using the @XmlSeeAlso annotation which is described in Section 32.4, "Adding Classes to the Runtime Marshaller" . Unmarshalling If the Java property for an any element has its lax set to false , or the property is not specified, the runtime will only accept DOM Element objects. Attempting to use any other type of object will result in a marshalling error. If the Java property for an any element has its lax set to true , the runtime uses its internal map between Java data types and the XML Schema constructs they represent to determine the XML structure to write to the wire. If the runtime knows the class and can map it to an XML Schema construct, it writes out the data and inserts an xsi:type attribute to identify the type of data the element contains. If the runtime cannot map the Java object to a known XML Schema construct, it will throw a marshaling exception. You can add types to the runtime's map using the @XmlSeeAlso annotation which is described in Section 32.4, "Adding Classes to the Runtime Marshaller" . 36.2. Using the XML Schema anyType Type Overview The XML Schema type xsd:anyType is the root type for all XML Schema types. All of the primitives are derivatives of this type, as are all user defined complex types. As a result, elements defined as being of xsd:anyType can contain data in the form of any of the XML Schema primitives as well as any complex type defined in a schema document. In Java the closest matching type is the Object class. 
It is the class from which all other Java classes are sub-typed. Using in XML Schema You use the xsd:anyType type as you would any other XML Schema complex type. It can be used as the value of an element element's type attribute. It can also be used as the base type from which other types are defined. Example 36.5, "Complex Type with a Wild Card Element" shows an example of a complex type that contains an element of type xsd:anyType . Example 36.5. Complex Type with a Wild Card Element Mapping to Java Elements that are of type xsd:anyType are mapped to Object objects. Example 36.6, "Java Representation of a Wild Card Element" shows the mapping of Example 36.5, "Complex Type with a Wild Card Element" to a Java class. Example 36.6. Java Representation of a Wild Card Element This mapping allows you to place any data into the property representing the wild card element. The Apache CXF runtime handles the marshaling and unmarshaling of the data into a usable Java representation. Marshalling When Apache CXF marshals XML data into Java types, it attempts to marshal anyType elements into known JAXB objects. To determine if it is possible to marshal an anyType element into a JAXB generated object, the runtime inspects the element's xsi:type attribute to determine the actual type used to construct the data in the element. If the xsi:type attribute is not present, the runtime attempts to identify the element's actual data type by introspection. If the element's actual data type is determined to be one of the types known by the application's JAXB context, the element is marshaled into a JAXB object of the proper type. If the runtime cannot determine the actual data type of the element, or the actual data type of the element is not a known type, the runtime marshals the content into an org.w3c.dom.Element object. You will then need to work with the element's content using the DOM APIs. An application's runtime usually knows about all of the types generated from the schemas included in its contract. This includes the types defined in the contract's wsdl:types element, any data types added to the contract through inclusion, and any types added to the contract through importing other schema documents. You can also make the runtime aware of additional types using the @XmlSeeAlso annotation which is described in Section 32.4, "Adding Classes to the Runtime Marshaller" . Unmarshalling When Apache CXF unmarshals Java types into XML data, it uses an internal map between Java data types and the XML Schema constructs they represent to determine the XML structure to write to the wire. If the runtime knows the class and can map the class to an XML Schema construct, it writes out the data and inserts an xsi:type attribute to identify the type of data the element contains. If the data is stored in an org.w3c.dom.Element object, the runtime writes the XML structure represented by the object but it does not include an xsi:type attribute. If the runtime cannot map the Java object to a known XML Schema construct, it throws a marshaling exception. You can add types to the runtime's map using the @XmlSeeAlso annotation which is described in Section 32.4, "Adding Classes to the Runtime Marshaller" .
For example, you can create a single type that allows the elements <robot name="epsilon" />, <robot age="10000" />, or <robot type="weevil" /> without declaring any of the name , age , or type attributes in the schema. This can be particularly useful when flexibility in your data is required. Defining in XML Schema Undeclared attributes are defined in XML Schema using the anyAttribute element. It can be used wherever an attribute element can be used. The anyAttribute element has no attributes, as shown in Example 36.7, "Complex Type with an Undeclared Attribute" . Example 36.7. Complex Type with an Undeclared Attribute The defined type, arbitter , has two elements and can have any undeclared attributes placed on it. The three elements shown in Example 36.8, "Examples of Elements Defined with a Wild Card Attribute" can all be generated from the complex type arbitter . Example 36.8. Examples of Elements Defined with a Wild Card Attribute Mapping to Java When a complex type containing an anyAttribute element is mapped to Java, the code generator adds a member called otherAttributes to the generated class. otherAttributes is of type java.util.Map<QName, String> and it has a getter method that returns a live instance of the map. Because the map returned from the getter is live, any modifications to the map are automatically applied. Example 36.9, "Class for a Complex Type with an Undeclared Attribute" shows the class generated for the complex type defined in Example 36.7, "Complex Type with an Undeclared Attribute" . Example 36.9. Class for a Complex Type with an Undeclared Attribute Working with undeclared attributes The otherAttributes member of the generated class expects to be populated with a Map object. The map is keyed using QNames . Once you get the map, you can access any attributes set on the object and set new attributes on the object. Example 36.10, "Working with Undeclared Attributes" shows sample code for working with undeclared attributes. Example 36.10. Working with Undeclared Attributes The code in Example 36.10, "Working with Undeclared Attributes" does the following: Gets the map containing the undeclared attributes. Creates QNames to work with the attributes. Sets the values for the attributes into the map. Retrieves the value for one of the attributes.
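The following minimal sketch shows one way application code might inspect the lax any property of the SurprisePackage class from Example 36.4, "Java Class with an Any Element" after the runtime has read a document. The inspector class and the handling shown here are illustrative assumptions, not part of the generated code.

import org.w3c.dom.Element;

// A sketch of consuming the lax "any" property; assumes the generated
// SurprisePackage class from Example 36.4 is on the classpath.
public final class SurprisePackageInspector {
    public static String describeContents(SurprisePackage pkg) {
        Object any = pkg.getAny();
        if (any == null) {
            return "empty package";
        }
        if (any instanceof Element) {
            // The runtime could not match the content to a class known to its JAXB
            // context, so it fell back to a DOM Element; inspect it with the DOM APIs.
            return "unknown element: " + ((Element) any).getLocalName();
        }
        // Otherwise the content was matched to a JAXB class known to the runtime,
        // for example a type made visible with the @XmlSeeAlso annotation.
        return "JAXB object of type " + any.getClass().getName();
    }
}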
|
[
"<element name=\"FlyBoy\"> <complexType> <sequence> <any /> <element name=\"rank\" type=\"xsd:int\" /> </sequence> </complexType> </element>",
"<FlyBoy> <learJet>CL-215</learJet> <rank>2</rank> </element> <FlyBoy> <viper>Mark II</viper> <rank>1</rank> </element>",
"<complexType name=\"surprisePackage\"> <sequence> <any processContents=\"lax\" /> <element name=\"to\" type=\"xsd:string\" /> <element name=\"from\" type=\"xsd:string\" /> </sequence> </complexType>",
"public class SurprisePackage { @XmlAnyElement(lax = true) protected Object any; @XmlElement(required = true) protected String to; @XmlElement(required = true) protected String from; public Object getAny() { return any; } public void setAny(Object value) { this.any = value; } public String getTo() { return to; } public void setTo(String value) { this.to = value; } public String getFrom() { return from; } public void setFrom(String value) { this.from = value; } }",
"<complexType name=\"wildStar\"> <sequence> <element name=\"name\" type=\"xsd:string\" /> <element name=\"ship\" type=\"xsd:anyType\" /> </sequence> </complexType>",
"public class WildStar { @XmlElement(required = true) protected String name; @XmlElement(required = true) protected Object ship; public String getName() { return name; } public void setName(String value) { this.name = value; } public Object getShip() { return ship; } public void setShip(Object value) { this.ship = value; } }",
"<complexType name=\"arbitter\"> <sequence> <element name=\"name\" type=\"xsd:string\" /> <element name=\"rate\" type=\"xsd:float\" /> </sequence> <anyAttribute /> </complexType>",
"<officer rank=\"12\"><name>...</name><rate>...</rate></officer> <lawyer type=\"divorce\"><name>...</name><rate>...</rate></lawyer> <judge><name>...</name><rate>...</rate></judge>",
"public class Arbitter { @XmlElement(required = true) protected String name; protected float rate; @XmlAnyAttribute private Map<QName, String> otherAttributes = new HashMap<QName, String>(); public String getName() { return name; } public void setName(String value) { this.name = value; } public float getRate() { return rate; } public void setRate(float value) { this.rate = value; } public Map<QName, String> getOtherAttributes() { return otherAttributes; } }",
"Arbitter judge = new Arbitter(); Map<QName, String> otherAtts = judge.getOtherAttributes(); QName at1 = new QName(\"test.apache.org\", \"house\"); QName at2 = new QName(\"test.apache.org\", \"veteran\"); otherAtts.put(at1, \"Cape\"); otherAtts.put(at2, \"false\"); String vetStatus = otherAtts.get(at2);"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwswildcardtypemapping
|
3.6. Users and Roles
|
3.6. Users and Roles 3.6.1. Introduction to Users In Red Hat Virtualization, there are two types of user domains: local domain and external domain. A default local domain, called the internal domain, and a default admin user are created during the Manager installation process. You can create additional users on the internal domain using ovirt-aaa-jdbc-tool . User accounts created on local domains are known as local users. You can also attach external directory servers such as Red Hat Directory Server, Active Directory, OpenLDAP, and many other supported options to your Red Hat Virtualization environment and use them as external domains. User accounts created on external domains are known as directory users. Both local users and directory users need to be assigned appropriate roles and permissions through the Administration Portal before they can function in the environment. There are two main types of user roles: end user and administrator. An end user role uses and manages virtual resources from the VM Portal. An administrator role maintains the system infrastructure using the Administration Portal. The roles can be assigned to users for individual resources like virtual machines and hosts, or on a hierarchy of objects like clusters and data centers. 3.6.2. Introduction to Directory Servers During installation, Red Hat Virtualization Manager creates an admin user on the internal domain. The user is also referred to as admin@internal . This account is intended for use when initially configuring the environment and for troubleshooting. After you have attached an external directory server, added the directory users, and assigned them appropriate roles and permissions, the admin@internal user can be disabled if it is not required. The supported directory servers are: 389ds 389ds RFC-2307 Schema Active Directory IBM Security Directory Server IBM Security Directory Server RFC-2307 Schema FreeIPA iDM Novell eDirectory RFC-2307 Schema OpenLDAP RFC-2307 Schema OpenLDAP Standard Schema Oracle Unified Directory RFC-2307 Schema RFC-2307 Schema (Generic) Red Hat Directory Server (RHDS) Red Hat Directory Server (RHDS) RFC-2307 Schema iPlanet Important It is not possible to install Red Hat Virtualization Manager ( rhevm ) and IdM ( ipa-server ) on the same system. IdM is incompatible with the mod_ssl package, which is required by Red Hat Virtualization Manager. Important If you are using Active Directory as your directory server, and you want to use sysprep in the creation of templates and virtual machines, then the Red Hat Virtualization administrative user must be delegated control over the Domain to: Join a computer to the domain Modify the membership of a group For information on creating user accounts in Active Directory, see Create a New User Account . For information on delegation of control in Active Directory, see Delegate Control of an Organizational Unit . 3.6.3. Configuring an External LDAP Provider 3.6.3.1. Configuring an External LDAP Provider (Interactive Setup) Note The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide . The ovirt-engine-extension-aaa-ldap extension allows users to customize their external directory setup easily. The ovirt-engine-extension-aaa-ldap extension supports many different LDAP server types, and an interactive setup script is provided to assist you with the setup for most LDAP types.
If the LDAP server type is not listed in the interactive setup script, or you want to do more customization, you can manually edit the configuration files. See Configuring an External LDAP Provider for more information. For an Active Directory example, see Attaching an Active Directory . Prerequisites You must know the domain name of the DNS or the LDAP server. To set up secure connection between the LDAP server and the Manager, ensure that a PEM-encoded CA certificate has been prepared. Have at least one set of account name and password ready to perform search and login queries to the LDAP server. Procedure On the Red Hat Virtualization Manager, install the LDAP extension package: # dnf install ovirt-engine-extension-aaa-ldap-setup Run ovirt-engine-extension-aaa-ldap-setup to start the interactive setup: # ovirt-engine-extension-aaa-ldap-setup Select an LDAP type by entering the corresponding number. If you are not sure which schema your LDAP server is, select the standard schema of your LDAP server type. For Active Directory, follow the procedure at Attaching_an_Active_Directory . Available LDAP implementations: 1 - 389ds 2 - 389ds RFC-2307 Schema 3 - Active Directory 4 - IBM Security Directory Server 5 - IBM Security Directory Server RFC-2307 Schema 6 - IPA 7 - Novell eDirectory RFC-2307 Schema 8 - OpenLDAP RFC-2307 Schema 9 - OpenLDAP Standard Schema 10 - Oracle Unified Directory RFC-2307 Schema 11 - RFC-2307 Schema (Generic) 12 - RHDS 13 - RHDS RFC-2307 Schema 14 - iPlanet Please select: Press Enter to accept the default and configure domain name resolution for your LDAP server name: It is highly recommended to use DNS resolution for LDAP server. If for some reason you intend to use hosts or plain address disable DNS usage. Use DNS (Yes, No) [Yes]: Select a DNS policy method: For option 1, the DNS servers listed in /etc/resolv.conf are used to resolve the IP address. Check that the /etc/resolv.conf file is updated with the correct DNS servers. For option 2, enter the fully qualified domain name (FQDN) or the IP address of the LDAP server. You can use the dig command with the SRV record to find out the domain name. An SRV record takes the following format: _service._protocol . domain_name Example: dig _ldap._tcp.redhat.com SRV . For option 3, enter a space-separated list of LDAP servers. Use either the FQDN or IP address of the servers. This policy provides load-balancing between the LDAP servers. Queries are distributed among all LDAP servers according to the round-robin algorithm. For option 4, enter a space-separated list of LDAP servers. Use either the FQDN or IP address of the servers. This policy defines the first LDAP server to be the default LDAP server to respond to queries. If the first server is not available, the query will go to the LDAP server on the list. 1 - Single server 2 - DNS domain LDAP SRV record 3 - Round-robin between multiple hosts 4 - Failover between multiple hosts Please select: Select the secure connection method your LDAP server supports and specify the method to obtain a PEM-encoded CA certificate: File allows you to provide the full path to the certificate. URL allows you to specify a URL for the certificate. Inline allows you to paste the content of the certificate in the terminal. System allows you to specify the default location for all CA files. Insecure skips certificate validation, but the connection is still encrypted using TLS. NOTE: It is highly recommended to use secure protocol to access the LDAP server. 
Protocol startTLS is the standard recommended method to do so. Only in cases in which the startTLS is not supported, fallback to non standard ldaps protocol. Use plain for test environments only. Please select protocol to use (startTLS, ldaps, plain) [startTLS]: startTLS Please select method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): Please enter the password: Note LDAPS stands for Lightweight Directory Access Protocol over Secure Sockets Layer (SSL). For SSL connections, select the ldaps option. Enter the search user distinguished name (DN). The user must have permissions to browse all users and groups on the directory server. The search user must be specified in LDAP DN notation. If anonymous search is allowed, press Enter without any input. Enter search user DN (for example uid=username,dc=example,dc=com or leave empty for anonymous): uid=user1,ou=Users,ou=department-1,dc=example,dc=com Enter search user password: Enter the base DN: Please enter base DN (dc=redhat,dc=com) [dc=redhat,dc=com]: ou=department-1,dc=redhat,dc=com Select Yes if you intend to configure single sign-on for virtual machines. Note that this feature cannot be used together with single sign-on to the Administration Portal. The script reminds you that the profile name must match the domain name. You will still need to follow the instructions in Configuring Single Sign-On for Virtual Machines in the Virtual Machine Management Guide . Are you going to use Single Sign-On for Virtual Machines (Yes, No) [Yes]: Specify a profile name. The profile name is visible to users on the login page. This example uses redhat.com . Note To rename the profile after the domain has been configured, edit the ovirt.engine.aaa.authn.profile.name attribute in the /etc/ovirt-engine/extensions.d/ redhat.com -authn.properties file. Restart the ovirt-engine service for the changes to take effect. Please specify profile name that will be visible to users: redhat.com Figure 3.1. The Administration Portal Login Page Note Users must select the profile from the drop-down list when logging in for the first time. The information is stored in browser cookies and preselected the next time the user logs in. Test the login function to ensure your LDAP server is connected to your Red Hat Virtualization environment properly. For the login query, enter your user name and password: NOTE: It is highly recommended to test drive the configuration before applying it into engine. Login sequence is executed automatically, but it is recommended to also execute Search sequence manually after successful Login sequence. Please provide credentials to test login flow: Enter user name: Enter user password: [ INFO ] Executing login sequence... ... [ INFO ] Login sequence executed successfully Check that the user details are correct. If the user details are incorrect, select Abort : Please make sure that user details are correct and group membership meets expectations (search for PrincipalRecord and GroupRecord titles). Abort if output is incorrect. Select test sequence to execute (Done, Abort, Login, Search) [Abort]: Manually testing the Search function is recommended. For the search query, select Principal for user accounts or Group for group accounts. Select Yes to Resolve Groups if you want the group account information for the user account to be returned. Three configuration files are created and displayed in the screen output.
Select test sequence to execute (Done, Abort, Login, Search) [Search]: Search Select entity to search (Principal, Group) [Principal]: Term to search, trailing '*' is allowed: testuser1 Resolve Groups (Yes, No) [No]: Select Done to complete the setup: Select test sequence to execute (Done, Abort, Login, Search) [Abort]: Done [ INFO ] Stage: Transaction setup [ INFO ] Stage: Misc configuration [ INFO ] Stage: Package installation [ INFO ] Stage: Misc configuration [ INFO ] Stage: Transaction commit [ INFO ] Stage: Closing up CONFIGURATION SUMMARY Profile name is: redhat.com The following files were created: /etc/ovirt-engine/aaa/redhat.com.properties /etc/ovirt-engine/extensions.d/redhat.com.properties /etc/ovirt-engine/extensions.d/redhat.com-authn.properties [ INFO ] Stage: Clean up Log file is available at /tmp/ovirt-engine-extension-aaa-ldap-setup-20171004101225-mmneib.log: [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination Restart the ovirt-engine service. The profile you have created is now available on the Administration Portal and the VM Portal login pages. To assign the user accounts on the LDAP server appropriate roles and permissions, for example, to log in to the VM Portal, see Manager User Tasks . # systemctl restart ovirt-engine.service Note For more information, see the LDAP authentication and authorization extension README file at /usr/share/doc/ovirt-engine-extension-aaa-ldap- version . 3.6.3.2. Attaching an Active Directory Note The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide . Prerequisites You need to know the Active Directory forest name. The forest name is also known as the root domain name. Note Examples of the most common Active Directory configurations, which cannot be configured using the ovirt-engine-extension-aaa-ldap-setup tool, are provided in /usr/share/ovirt-engine-extension-aaa-ldap/examples/README.md . You need to either add the DNS server that can resolve the Active Directory forest name to the /etc/resolv.conf file on the Manager, or note down the Active Directory DNS servers and enter them when prompted by the interactive setup script. To set up secure connection between the LDAP server and the Manager, ensure a PEM-encoded CA certificate has been prepared. See Setting Up SSL or TLS Connections between the Manager and an LDAP Server for more information. Unless anonymous search is supported, a user with permissions to browse all users and groups must be available on the Active Directory to be used as the search user. Note down the search user's distinguished name (DN). Do not use the administrative user for the Active Directory. You must have at least one account name and password ready to perform search and login queries to the Active Directory. If your Active Directory deployment spans multiple domains, be aware of the limitation described in the /usr/share/ovirt-engine-extension-aaa-ldap/profiles/ad.properties file. Procedure On the Red Hat Virtualization Manager, install the LDAP extension package: # dnf install ovirt-engine-extension-aaa-ldap-setup Run ovirt-engine-extension-aaa-ldap-setup to start the interactive setup: # ovirt-engine-extension-aaa-ldap-setup Select an LDAP type by entering the corresponding number. The LDAP-related questions after this step are different for different LDAP types. 
Available LDAP implementations: 1 - 389ds 2 - 389ds RFC-2307 Schema 3 - Active Directory 4 - IBM Security Directory Server 5 - IBM Security Directory Server RFC-2307 Schema 6 - IPA 7 - Novell eDirectory RFC-2307 Schema 8 - OpenLDAP RFC-2307 Schema 9 - OpenLDAP Standard Schema 10 - Oracle Unified Directory RFC-2307 Schema 11 - RFC-2307 Schema (Generic) 12 - RHDS 13 - RHDS RFC-2307 Schema 14 - iPlanet Please select: 3 Enter the Active Directory forest name. If the forest name is not resolvable by your Manager's DNS, the script prompts you to enter a space-separated list of Active Directory DNS server names. Please enter Active Directory Forest name: ad-example.redhat.com [ INFO ] Resolving Global Catalog SRV record for ad-example.redhat.com [ INFO ] Resolving LDAP SRV record for ad-example.redhat.com Select the secure connection method your LDAP server supports and specify the method to obtain a PEM-encoded CA certificate. The file option allows you to provide the full path to the certificate. The URL option allows you to specify a URL to the certificate. Use the inline option to paste the content of the certificate in the terminal. The system option allows you to specify the default location for all CA files. The insecure option allows you to use startTLS in insecure mode. NOTE: It is highly recommended to use secure protocol to access the LDAP server. Protocol startTLS is the standard recommended method to do so. Only in cases in which the startTLS is not supported, fallback to non standard ldaps protocol. Use plain for test environments only. Please select protocol to use (startTLS, ldaps, plain) [startTLS]: startTLS Please select method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): File Please enter the password: Note LDAPS stands for Lightweight Directory Access Protocol over Secure Sockets Layer (SSL). For SSL connections, select the ldaps option. For more information on creating a PEM-encoded CA certificate, see Setting Up SSL or TLS Connections between the Manager and an LDAP Server . Enter the search user distinguished name (DN). The user must have permissions to browse all users and groups on the directory server. The search user must be specified in LDAP DN notation. If anonymous search is allowed, press Enter without any input. Enter search user DN (empty for anonymous): cn=user1,ou=Users,dc=test,dc=redhat,dc=com Enter search user password: Specify whether to use single sign-on for virtual machines. This feature is enabled by default, but cannot be used if single sign-on to the Administration Portal is enabled. The script reminds you that the profile name must match the domain name. You will still need to follow the instructions in Configuring Single Sign-On for Virtual Machines in the Virtual Machine Management Guide . Are you going to use Single Sign-On for Virtual Machines (Yes, No) [Yes]: Specify a profile name. The profile name is visible to users on the login page. This example uses redhat.com . Please specify profile name that will be visible to users: redhat.com Figure 3.2. The Administration Portal Login Page Note Users need to select the desired profile from the drop-down list when logging in for the first time. The information is then stored in browser cookies and preselected the next time the user logs in. Test the search and login function to ensure your LDAP server is connected to your Red Hat Virtualization environment properly. For the login query, enter the account name and password.
For the search query, select Principal for user accounts, and select Group for group accounts. Enter Yes to Resolve Groups if you want the group account information for the user account to be returned. Select Done to complete the setup. Three configuration files are created and displayed in the screen output. The profile you have created is now available on the Administration Portal and the VM Portal login pages. To assign the user accounts on the LDAP server appropriate roles and permissions, for example, to log in to the VM Portal, see Manager User Tasks . Note For more information, see the LDAP authentication and authorization extension README file at /usr/share/doc/ovirt-engine-extension-aaa-ldap- version . 3.6.3.3. Configuring an External LDAP Provider (Manual Method) Note The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide . The ovirt-engine-extension-aaa-ldap extension uses the LDAP protocol to access directory servers and is fully customizable. Kerberos authentication is not required unless you want to enable the single sign-on to the VM Portal or the Administration Portal feature. If the interactive setup method in the section does not cover your use case, you can manually modify the configuration files to attach your LDAP server. The following procedure uses generic details. Specific values depend on your setup. Procedure On the Red Hat Virtualization Manager, install the LDAP extension package: # dnf install ovirt-engine-extension-aaa-ldap Copy the LDAP configuration template file into the /etc/ovirt-engine directory. Template files are available for active directories ( ad ) and other directory types ( simple ). This example uses the simple configuration template. # cp -r /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple/. /etc/ovirt-engine Rename the configuration files to match the profile name you want visible to users on the Administration Portal and the VM Portal login pages: # mv /etc/ovirt-engine/aaa/profile1.properties /etc/ovirt-engine/aaa/ example .properties # mv /etc/ovirt-engine/extensions.d/profile1-authn.properties /etc/ovirt-engine/extensions.d/ example -authn.properties # mv /etc/ovirt-engine/extensions.d/profile1-authz.properties /etc/ovirt-engine/extensions.d/ example -authz.properties Edit the LDAP property configuration file by uncommenting an LDAP server type and updating the domain and passwords fields: # vi /etc/ovirt-engine/aaa/ example .properties Example 3.5. Example profile: LDAP server section # Select one # include = <openldap.properties> #include = <389ds.properties> #include = <rhds.properties> #include = <ipa.properties> #include = <iplanet.properties> #include = <rfc2307-389ds.properties> #include = <rfc2307-rhds.properties> #include = <rfc2307-openldap.properties> #include = <rfc2307-edir.properties> #include = <rfc2307-generic.properties> # Server # vars.server = ldap1.company.com # Search user and its password. # vars.user = uid=search,cn=users,cn=accounts,dc= company ,dc= com vars.password = 123456 pool.default.serverset.single.server = USD{global:vars.server} pool.default.auth.simple.bindDN = USD{global:vars.user} pool.default.auth.simple.password = USD{global:vars.password} To use TLS or SSL protocol to interact with the LDAP server, obtain the root CA certificate for the LDAP server and use it to create a public keystore file. 
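One way to create this public keystore is with the JDK keytool utility. The following command is a sketch only; the alias, certificate file, keystore path, and password are placeholder values that you should replace with your own, and the full procedure is covered in Setting Up SSL or TLS Connections between the Manager and an LDAP Server :

# keytool -importcert -noprompt -trustcacerts -alias ldaprootca -file /tmp/ldap-root-ca.pem -keystore /full/path/to/myrootca.jks -storepass password

The keystore path and password you choose here are the values to supply for the pool.default.ssl.truststore.file and pool.default.ssl.truststore.password properties in the keystore section of the profile.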
Uncomment the following lines and specify the full path to the public keystore file and the password to access the file. Note For more information on creating a public keystore file, see Setting Up SSL or TLS Connections between the Manager and an LDAP Server . Example 3.6. Example profile: keystore section # Create keystore, import certificate chain and uncomment # if using tls. pool.default.ssl.startTLS = true pool.default.ssl.truststore.file = /full/path/to/myrootca.jks pool.default.ssl.truststore.password = password Review the authentication configuration file. The profile name visible to users on the Administration Portal and the VM Portal login pages is defined by ovirt.engine.aaa.authn.profile.name . The configuration profile location must match the LDAP configuration file location. All fields can be left as default. # vi /etc/ovirt-engine/extensions.d/ example -authn.properties Example 3.7. Example authentication configuration file ovirt.engine.extension.name = example -authn ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.ldap ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.ldap.AuthnExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn ovirt.engine.aaa.authn.profile.name = example ovirt.engine.aaa.authn.authz.plugin = example -authz config.profile.file.1 = ../aaa/ example .properties Review the authorization configuration file. The configuration profile location must match the LDAP configuration file location. All fields can be left as default. # vi /etc/ovirt-engine/extensions.d/ example -authz.properties Example 3.8. Example authorization configuration file ovirt.engine.extension.name = example -authz ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.ldap ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.ldap.AuthzExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz config.profile.file.1 = ../aaa/ example .properties Ensure that the ownership and permissions of the configuration profile are appropriate: # chown ovirt:ovirt /etc/ovirt-engine/aaa/ example .properties # chmod 600 /etc/ovirt-engine/aaa/ example .properties Restart the engine service: # systemctl restart ovirt-engine.service The example profile you have created is now available on the Administration Portal and the VM Portal login pages. To give the user accounts on the LDAP server appropriate permissions, for example, to log in to the VM Portal, see Manager User Tasks . Note For more information, see the LDAP authentication and authorization extension README file at /usr/share/doc/ovirt-engine-extension-aaa-ldap- version . 3.6.3.4. Removing an External LDAP Provider This procedure shows you how to remove an external configured LDAP provider and its users. Procedure Remove the LDAP provider configuration files, replacing the default name profile1 : # rm /etc/ovirt-engine/extensions.d/ profile1 -authn.properties # rm /etc/ovirt-engine/extensions.d/ profile1 -authz.properties # rm /etc/ovirt-engine/aaa/ profile1 .properties Restart the ovirt-engine service: # systemctl restart ovirt-engine In the Administration Portal, in the Users resource tab, select the users of this provider (those whose Authorization provider is profile1 -authz) and click Remove . 3.6.4. 
Configuring LDAP and Kerberos for Single Sign-on Single sign-on allows users to log in to the VM Portal or the Administration Portal without re-typing their passwords. Authentication credentials are obtained from the Kerberos server. To configure single sign-on to the Administration Portal and the VM Portal, you need to configure two extensions: ovirt-engine-extension-aaa-misc and ovirt-engine-extension-aaa-ldap ; and two Apache modules: mod_auth_gssapi and mod_session . You can configure single sign-on that does not involve Kerberos, however this is outside the scope of this documentation. Note If single sign-on to the VM Portal is enabled, single sign-on to virtual machines is not possible. With single sign-on to the VM Portal enabled, the VM Portal does not need to accept a password, so you cannot delegate the password to sign in to virtual machines. This example assumes the following: The existing Key Distribution Center (KDC) server uses the MIT version of Kerberos 5. You have administrative rights to the KDC server. The Kerberos client is installed on the Red Hat Virtualization Manager and user machines. The kadmin utility is used to create Kerberos service principals and keytab files. This procedure involves the following components: On the KDC server Create a service principal and a keytab file for the Apache service on the Red Hat Virtualization Manager. On the Red Hat Virtualization Manager Install the authentication and authorization extension packages and the Apache Kerberos authentication module. Configure the extension files. 3.6.4.1. Configuring Kerberos for the Apache Service On the KDC server, use the kadmin utility to create a service principal for the Apache service on the Red Hat Virtualization Manager. The service principal is a reference ID to the KDC for the Apache service. # kadmin kadmin> addprinc -randkey HTTP/fqdn-of-rhevm @ REALM.COM Generate a keytab file for the Apache service. The keytab file stores the shared secret key. Note The engine-backup command includes the file /etc/httpd/http.keytab when backing up and restoring. If you use a different name for the keytab file, make sure you back up and restore it. kadmin> ktadd -k /tmp/http.keytab HTTP/fqdn-of-rhevm @ REALM.COM kadmin> quit Copy the keytab file from the KDC server to the Red Hat Virtualization Manager: # scp /tmp/http.keytab root@ rhevm.example.com :/etc/httpd == Configuring Single Sign-on to the VM Portal or Administration Portal On the Red Hat Virtualization Manager, ensure that the ownership and permissions for the keytab are appropriate: # chown apache /etc/httpd/http.keytab # chmod 400 /etc/httpd/http.keytab Install the authentication extension package, LDAP extension package, and the mod_auth_gssapi and mod_session Apache modules: # dnf install ovirt-engine-extension-aaa-misc ovirt-engine-extension-aaa-ldap mod_auth_gssapi mod_session Note The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide . Copy the SSO configuration template file into the /etc/ovirt-engine directory. Template files are available for Active Directory ( ad-sso ) and other directory types ( simple-sso ). This example uses the simple SSO configuration template. # cp -r /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple-sso/. /etc/ovirt-engine Move ovirt-sso.conf into the Apache configuration directory. 
Note The engine-backup command includes the file /etc/httpd/conf.d/ovirt-sso.conf when backing up and restoring. If you use a different name for this file, make sure you back up and restore it. # mv /etc/ovirt-engine/aaa/ovirt-sso.conf /etc/httpd/conf.d Review the authentication method file. You do not need to edit this file, as the realm is automatically fetched from the keytab file. # vi /etc/httpd/conf.d/ovirt-sso.conf Example 3.9. Example authentication method file Rename the configuration files to match the profile name you want visible to users on the Administration Portal and the VM Portal login pages: # mv /etc/ovirt-engine/aaa/profile1.properties /etc/ovirt-engine/aaa/ example .properties # mv /etc/ovirt-engine/extensions.d/profile1-http-authn.properties /etc/ovirt-engine/extensions.d/ example -http-authn.properties # mv /etc/ovirt-engine/extensions.d/profile1-http-mapping.properties /etc/ovirt-engine/extensions.d/ example -http-mapping.properties # mv /etc/ovirt-engine/extensions.d/profile1-authz.properties /etc/ovirt-engine/extensions.d/ example -authz.properties Edit the LDAP property configuration file by uncommenting an LDAP server type and updating the domain and passwords fields: # vi /etc/ovirt-engine/aaa/ example .properties Example 3.10. Example profile: LDAP server section # Select one include = <openldap.properties> #include = <389ds.properties> #include = <rhds.properties> #include = <ipa.properties> #include = <iplanet.properties> #include = <rfc2307-389ds.properties> #include = <rfc2307-rhds.properties> #include = <rfc2307-openldap.properties> #include = <rfc2307-edir.properties> #include = <rfc2307-generic.properties> # Server # vars.server = ldap1.company.com # Search user and its password. # vars.user = uid=search,cn=users,cn=accounts,dc=company,dc=com vars.password = 123456 pool.default.serverset.single.server = USD{global:vars.server} pool.default.auth.simple.bindDN = USD{global:vars.user} pool.default.auth.simple.password = USD{global:vars.password} To use TLS or SSL protocol to interact with the LDAP server, obtain the root CA certificate for the LDAP server and use it to create a public keystore file. Uncomment the following lines and specify the full path to the public keystore file and the password to access the file. Note For more information on creating a public keystore file, see Setting Up SSL or TLS Connections between the Manager and an LDAP Server . Example 3.11. Example profile: keystore section # Create keystore, import certificate chain and uncomment # if using ssl/tls. pool.default.ssl.startTLS = true pool.default.ssl.truststore.file = /full/path/to/myrootca.jks pool.default.ssl.truststore.password = password Review the authentication configuration file. The profile name visible to users on the Administration Portal and the VM Portal login pages is defined by ovirt.engine.aaa.authn.profile.name . The configuration profile location must match the LDAP configuration file location. All fields can be left as default. # vi /etc/ovirt-engine/extensions.d/ example -http-authn.properties Example 3.12. 
Example authentication configuration file ovirt.engine.extension.name = example -http-authn ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.misc.http.AuthnExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn ovirt.engine.aaa.authn.profile.name = example -http ovirt.engine.aaa.authn.authz.plugin = example -authz ovirt.engine.aaa.authn.mapping.plugin = example -http-mapping config.artifact.name = HEADER config.artifact.arg = X-Remote-User Review the authorization configuration file. The configuration profile location must match the LDAP configuration file location. All fields can be left as default. # vi /etc/ovirt-engine/extensions.d/ example -authz.properties Example 3.13. Example authorization configuration file ovirt.engine.extension.name = example -authz ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.ldap ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.ldap.AuthzExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz config.profile.file.1 = ../aaa/ example .properties Review the authentication mapping configuration file. The configuration profile location must match the LDAP configuration file location. The configuration profile extension name must match the ovirt.engine.aaa.authn.mapping.plugin value in the authentication configuration file. All fields can be left as default. # vi /etc/ovirt-engine/extensions.d/ example -http-mapping.properties Example 3.14. Example authentication mapping configuration file Ensure that the ownership and permissions of the configuration files are appropriate: # chown ovirt:ovirt /etc/ovirt-engine/aaa/ example .properties # chown ovirt:ovirt /etc/ovirt-engine/extensions.d/ example -http-authn.properties # chown ovirt:ovirt /etc/ovirt-engine/extensions.d/ example -http-mapping.properties # chown ovirt:ovirt /etc/ovirt-engine/extensions.d/ example -authz.properties # chmod 600 /etc/ovirt-engine/aaa/ example .properties # chmod 640 /etc/ovirt-engine/extensions.d/ example -http-authn.properties # chmod 640 /etc/ovirt-engine/extensions.d/ example -http-mapping.properties # chmod 640 /etc/ovirt-engine/extensions.d/ example -authz.properties Restart the Apache service and the ovirt-engine service: # systemctl restart httpd.service # systemctl restart ovirt-engine.service 3.6.5. Installing and Configuring Red Hat Single Sign-On To use Red Hat Single Sign-On as your authorization method, you need to: Install Red Hat SSO. Configure the LDAP group mapper. Configure Apache on the Manager. Configure OVN provider credentials. Configure the Monitoring Portal (Grafana) Note If Red Hat SSO is configured, LDAP sign ons will not work, as only a single authorization protocol may be used at a time. 3.6.5.1. Installing Red Hat SSO You can install Red Hat Single Sign-On by downloading a ZIP file and unpacking it, or by using an RPM file. Follow the installation instructions at Red Hat SSO Installation Prepare the following information: Path/location of the Open ID Connect server. The subscription channel for the correct repositories. Valid Red Hat subscription login credentials. 3.6.5.2. 
Configuring the LDAP group mapper Procedure Add the LDAP groups mapper with the following information: Name : ldapgroups Mapper Type : group-ldap-mapper LDAP Groups DN : ou=groups,dc=example,dc=com Group Object Classes : groupofuniquenames ( adapt this class according to your LDAP server setup ) Membership LDAP Attribute : uniquemember ( adapt this class according to your LDAP server setup ) Click Save . Click Sync LDAP Groups to KeyCloak . At the bottom of the User Federation Provider page, click Synchronize all users . In the Clients tab, under Add Client , add ovirt-engine as the Client ID , and enter the engine url as the Root URL . Modify the Client Protocol to openid-connect and the Access Type to confidential . In the Clients tab, under Ovirt-engine > Advanced Settings , increase the Access Token Lifespan . Add https://rhvm.example.com:443/* as a valid redirect URI. The client secret is generated, and can be viewed in the Credentials tab. In the Clients tab under Create Mapper Protocol , create a mapper with the following settings: Name : groups Mapper Type : Group Membership Token Claim Name : groups Full group path : ON Add to ID token : ON Add to access token : ON Add to userinfo : ON Add the Builtin Protocol Mapper for username . Create the scopes needed by ovirt-engine , ovirt-app-api , ovirt-app-admin , and ovirt-ext=auth:sequence-priority=~ . Use the scopes created in the step to set up optional client scopes for the ovirt-engine client. 3.6.5.3. Configuring Apache in the Manager Enable the mod_auth_openidc module. Configure Apache in the Manager Create a new httpd config file ovirt-openidc.conf in /etc/httpd/conf.d/ with the following content: To save the configuration changes, restart httpd and ovirt-engine : Create the file openidc-authn.properties in /etc/ovirt-engine/extensions.d/ with the following content: Create the file openidc-http-mapping.properties in /etc/ovirt-engine/extensions.d/ with the following content: Create the file openidc-authz.properties in /etc/ovirt-engine/extensions.d/ with the following content: Create the file 99-enable-external-auth.conf in /etc/ovirt-engine/engine.conf.d/ with the following content: 3.6.5.4. Configuring OVN If you configured the ovirt-ovn-provider in the Manager, you need to configure the OVN provider credentials. Procedure Create the file 20-setup-ovirt-provider-ovn.conf in /etc/ovirt-provider-ovn/conf.d/ with the following contents, where user1 belongs to the LDAP group ovirt-administrator , and openidchttp is the profile configured for aaa-ldap-misc . Restart the ovirt-provider-ovn : Log in to the Administration Portal, navigate to Administration Providers , select ovirt-provider-ovn , and click Edit to update the password for the ovn provider. 3.6.5.5. Configuring the Monitoring Portal (Grafana) Procedure Configure a client's valid redirect URL: Select the client configured in earlier steps (ie. ovirt-engine) Add an additional valid Redirect URI for the Monitoring Portal(Grafana). Valid Redirect URIs: https://rhvm.example.com:443/ovirt-engine-grafana/login/generic_oauth/ Select the Mappers tab. Click Create to create a new Mapper, and fill in the following fields: Name: realm role Mapper Type: User Realm Role Token Claim Name: realm_access.roles Claim JSON Type: String Configure Grafana specific roles: Select Roles from the main menu. Add the following roles: admin , editor , viewer . Assign Grafana specific roles to the desired groups: Select Groups from the main menu, and choose a desired group. Select Role Mappings . 
Move the desired roles from Available Roles to Assigned Roles . Configure Grafana - modify the section auth.generic_oauth in /etc/grafana/grafana.ini as follows. Replace the values in arrow brackets < > as needed. 3.6.6. User Authorization 3.6.6.1. User Authorization Model Red Hat Virtualization applies authorization controls based on the combination of the three components: The user performing the action The type of action being performed The object on which the action is being performed 3.6.6.2. User Actions For an action to be successfully performed, the user must have the appropriate permission for the object being acted upon. Each type of action has a corresponding permission . Some actions are performed on more than one object. For example, copying a template to another storage domain will impact both the template and the destination storage domain. The user performing an action must have appropriate permissions for all objects the action impacts. 3.6.7. Administering User Tasks From the Administration Portal 3.6.7.1. The Account Settings window The Administration Account Settings window allows you to view or edit the following Administration Portal user settings: General tab: User Name - read only. E-mail - read only. Home Page : Default - #dashboard-main . Custom home page - enter only the last part of the URL, including hash mark ( # ). For example: #vms-snapshots;name-testVM . Serial Console User's Public Key - enter the SSH public key that is used to access the Manager using a serial console. Tables Persist grid settings - saves the grid column settings on the server. Confirmations tab: Show confirmation dialog on Suspend VM - enables a confirmation dialog when the VM is suspended. 3.6.7.2. Adding Users and Assigning VM Portal Permissions Users must be created already before they can be added and assigned roles and permissions. The roles and permissions assigned in this procedure give the user the permission to log in to the VM Portal and to start creating virtual machines. The procedure also applies to group accounts. Procedure On the header bar, click Administration Configure . This opens the Configure window. Click System Permissions . Click Add . This opens the Add System Permission to User window. Select a profile under Search . The profile is the domain you want to search. Enter a name or part of a name in the search text field, and click GO . Alternatively, click GO to view a list of all users and groups. Select the check boxes for the appropriate users or groups. Select an appropriate role to assign under Role to Assign . The UserRole role gives the user account the permission to log in to the VM Portal. Click OK . Log in to the VM Portal to verify that the user account has the permissions to log in. 3.6.7.3. Viewing User Information Procedure Click Administration Users to display the list of authorized users. Click the user's name. This opens the details view, usually with the General tab displaying general information, such as the domain name, email and status of the user. The other tabs allow you to view groups, permissions, quotas, and events for the user. For example, to view the groups to which the user belongs, click the Directory Groups tab. 3.6.7.4. Viewing User Permissions on Resources Users can be assigned permissions on specific resources or a hierarchy of resources. You can view the assigned users and their permissions on each resource. Procedure Find and click the resource's name. This opens the details view. 
Click the Permissions tab to list the assigned users, the user's role, and the inherited permissions for the selected resource. 3.6.7.5. Removing Users When a user account is no longer required, remove it from Red Hat Virtualization. Procedure Click Administration Users to display the list of authorized users. Select the user to be removed. Ensure the user is not running any virtual machines. Click Remove , then click OK . The user is removed from Red Hat Virtualization, but not from the external directory. 3.6.7.6. Viewing Logged-In Users You can view the users who are currently logged in, along with session times and other details. Click Administration Active User Sessions to view the Session DB ID , User Name , Authorization provider , User id , Source IP , Session Start Time , and Session Last Active Time for each logged-in user. 3.6.7.7. Terminating a User Session You can terminate the session of a user who is currently logged in. Terminating a User Session Click Administration Active User Sessions . Select the user session to be terminated. Click Terminate Session . Click OK . 3.6.8. Administering User Tasks From the Command Line You can use the ovirt-aaa-jdbc-tool tool to manage user accounts on the internal domain. Changes made using the tool take effect immediately and do not require you to restart the ovirt-engine service. For a full list of user options, run ovirt-aaa-jdbc-tool user --help . Common examples are provided in this section. Important You must be logged into the Manager machine. 3.6.8.1. Creating a New User You can create a new user account. The optional --attribute option specifies account details. For a full list of options, run ovirt-aaa-jdbc-tool user add --help . You can add the newly created user in the Administration Portal and assign the user appropriate roles and permissions. See Adding users for more information. 3.6.8.2. Setting a User Password You can create a password. You must set a value for --password-valid-to , otherwise the password expiry time defaults to the current time. The date format is yyyy-MM-dd HH:mm:ssX , where X is the time zone offset from UTC. In this example, -0800 stands for GMT minus 8 hours. For zero offset, use the value Z . For more options, run ovirt-aaa-jdbc-tool user password-reset --help . Note By default, the password policy for user accounts on the internal domain has the following restrictions: A minimum of 6 characters. The previous three passwords cannot be reused when the password is changed. For more information on the password policy and other default settings, run ovirt-aaa-jdbc-tool settings show . Once the admin password is updated, the change must be manually propagated to ovirt-provider-ovn . Otherwise, the admin user will become locked because the Red Hat Virtualization Manager will continue to use the old password to synchronize networks from ovirt-provider-ovn . To propagate the new password to ovirt-provider-ovn , do the following: In the Administration Portal, click Administration Providers . Select ovirt-provider-ovn . Click Edit and enter the new password in the Password field. Click Test to test if authentication succeeds with the credentials you provided. When the authentication test succeeds, click OK . 3.6.8.3. Setting User Timeout You can set the user timeout period: # engine-config --set UserSessionTimeOutInterval= integer 3.6.8.4. Pre-encrypting a User Password You can create a pre-encrypted user password using the ovirt-engine-crypto-tool script.
This option is useful if you are adding users and passwords to the database with a script. Note Passwords are stored in the Manager database in encrypted form. The ovirt-engine-crypto-tool script is used because all passwords must be encrypted with the same algorithm. If the password is pre-encrypted, password validity tests cannot be performed. The password will be accepted even if it does not comply with the password validation policy. Run the following command: # /usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh pbe-encode The script will prompt you to enter the password. Alternatively, you can use the --password=file: file option to encrypt a single password that appears as the first line of a file. This option is useful for automation. In the following example, file is a text file containing a single password for encryption: # /usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh pbe-encode --password=file: file Set the new password with the ovirt-aaa-jdbc-tool script, using the --encrypted option: # ovirt-aaa-jdbc-tool user password-reset test1 --password-valid-to="2025-08-01 12:00:00-0800" --encrypted Enter and confirm the encrypted password: 3.6.8.5. Viewing User Information You can view detailed user account information: # ovirt-aaa-jdbc-tool user show test1 This command displays more information than the Administration Portal's Administration Users screen. 3.6.8.6. Editing User Information You can update user information, such as the email address: 3.6.8.7. Removing a User You can remove a user account: # ovirt-aaa-jdbc-tool user delete test1 Remove the user from the Administration Portal. See Removing Users for more information. 3.6.8.8. Disabling the Internal Administrative User You can disable users on local domains, including the admin@internal user created during engine-setup . Make sure you have at least one user in the environment with full administrative permissions before disabling the default admin user. Procedure Log in to the machine on which the Red Hat Virtualization Manager is installed. Make sure another user with the SuperUser role has been added to the environment. See Adding users for more information. Disable the default admin user: # ovirt-aaa-jdbc-tool user edit admin --flag=+disabled Note To enable a disabled user, run ovirt-aaa-jdbc-tool user edit username --flag=-disabled 3.6.8.9. Managing Groups You can use the ovirt-aaa-jdbc-tool tool to manage group accounts on your internal domain. Managing group accounts is similar to managing user accounts. For a full list of group options, run ovirt-aaa-jdbc-tool group --help . Common examples are provided in this section. Creating a Group This procedure shows you how to create a group account, add users to the group, and view the details of the group. Log in to the machine on which the Red Hat Virtualization Manager is installed. Create a new group: # ovirt-aaa-jdbc-tool group add group1 Add users to the group. The users must already exist. # ovirt-aaa-jdbc-tool group-manage useradd group1 --user= test1 Note For a full list of the group-manage options, run ovirt-aaa-jdbc-tool group-manage --help . View group account details: # ovirt-aaa-jdbc-tool group show group1 Add the newly created group in the Administration Portal and assign the group appropriate roles and permissions. The users in the group inherit the roles and permissions of the group. See Adding users for more information. Creating Nested Groups This procedure shows you how to create groups within groups.
Log in to the machine on which the Red Hat Virtualization Manager is installed. Create the first group: # ovirt-aaa-jdbc-tool group add group1 Create the second group: # ovirt-aaa-jdbc-tool group add group1-1 Add the second group to the first group: # ovirt-aaa-jdbc-tool group-manage groupadd group1 --group= group1-1 Add the first group in the Administration Portal and assign the group appropriate roles and permissions. See Adding users for more information. 3.6.8.10. Querying Users and Groups The query module allows you to query user and group information. For a full list of options, run ovirt-aaa-jdbc-tool query --help . Listing All User or Group Account Details This procedure shows you how to list all account information. Log in to the machine on which the Red Hat Virtualization Manager is installed. List the account details. All user account details: # ovirt-aaa-jdbc-tool query --what=user All group account details: # ovirt-aaa-jdbc-tool query --what=group Listing Filtered Account Details This procedure shows you how to apply filters when listing account information. Log in to the machine on which the Red Hat Virtualization Manager is installed. Filter account details using the --pattern parameter. List user account details with names that start with the character j : # ovirt-aaa-jdbc-tool query --what=user --pattern="name= j* " List groups that have the department attribute set to marketing : # ovirt-aaa-jdbc-tool query --what=group --pattern="department= marketing " 3.6.8.11. Managing Account Settings To change the default account settings, use the ovirt-aaa-jdbc-tool settings module. Updating Account Settings This procedure shows you how to update the default account settings. Log in to the machine on which the Red Hat Virtualization Manager is installed. Run the following command to show all the settings available: # ovirt-aaa-jdbc-tool settings show Change the desired settings: This example updates the default login session time to 60 minutes for all user accounts. The default value is 10080 minutes. # ovirt-aaa-jdbc-tool settings set --name=MAX_LOGIN_MINUTES --value= 60 This example updates the number of failed login attempts a user can perform before the user account is locked. The default value is 5. # ovirt-aaa-jdbc-tool settings set --name=MAX_FAILURES_SINCE_SUCCESS --value= 3 Note To unlock a locked user account, run ovirt-aaa-jdbc-tool user unlock test1 . 3.6.9. Configuring Additional Local Domains Creating additional local domains other than the default internal domain is also supported. This can be done using the ovirt-engine-extension-aaa-jdbc extension and allows you to create multiple domains without attaching external directory servers, though the use case may not be common for enterprise environments. Additionally created local domains will not be upgraded automatically during standard Red Hat Virtualization upgrades and need to be upgraded manually for each future release. For more information on creating additional local domains and how to upgrade the domains, see the README file at /usr/share/doc/ovirt-engine-extension-aaa-jdbc- version /README.admin . Note The ovirt-engine-extension-aaa-jdbc extension is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide .
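The command-line tasks in this section can be combined into a short sequence. The following is a sketch that reuses the placeholder names test1 and group1 and the example expiry date from this section; account details can additionally be supplied with the optional --attribute option described above, and all values should be adjusted for your environment:

# ovirt-aaa-jdbc-tool user add test1
# ovirt-aaa-jdbc-tool user password-reset test1 --password-valid-to="2025-08-01 12:00:00-0800"
# ovirt-aaa-jdbc-tool group add group1
# ovirt-aaa-jdbc-tool group-manage useradd group1 --user=test1
# ovirt-aaa-jdbc-tool query --what=user --pattern="name=test*"

After running these commands, the new user and group must still be added in the Administration Portal and assigned appropriate roles and permissions before they can log in, as described in Adding Users and Assigning VM Portal Permissions.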
|
[
"dnf install ovirt-engine-extension-aaa-ldap-setup",
"ovirt-engine-extension-aaa-ldap-setup",
"Available LDAP implementations: 1 - 389ds 2 - 389ds RFC-2307 Schema 3 - Active Directory 4 - IBM Security Directory Server 5 - IBM Security Directory Server RFC-2307 Schema 6 - IPA 7 - Novell eDirectory RFC-2307 Schema 8 - OpenLDAP RFC-2307 Schema 9 - OpenLDAP Standard Schema 10 - Oracle Unified Directory RFC-2307 Schema 11 - RFC-2307 Schema (Generic) 12 - RHDS 13 - RHDS RFC-2307 Schema 14 - iPlanet Please select:",
"It is highly recommended to use DNS resolution for LDAP server. If for some reason you intend to use hosts or plain address disable DNS usage. Use DNS (Yes, No) [Yes]:",
"_service._protocol . domain_name",
"1 - Single server 2 - DNS domain LDAP SRV record 3 - Round-robin between multiple hosts 4 - Failover between multiple hosts Please select:",
"NOTE: It is highly recommended to use secure protocol to access the LDAP server. Protocol startTLS is the standard recommended method to do so. Only in cases in which the startTLS is not supported, fallback to non standard ldaps protocol. Use plain for test environments only. Please select protocol to use (startTLS, ldaps, plain) [startTLS]: startTLS Please select method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): Please enter the password:",
"Enter search user DN (for example uid=username,dc=example,dc=com or leave empty for anonymous): uid=user1,ou=Users,ou=department-1,dc=example,dc=com Enter search user password:",
"Please enter base DN (dc=redhat,dc=com) [dc=redhat,dc=com]: ou=department-1,dc=redhat,dc=com",
"Are you going to use Single Sign-On for Virtual Machines (Yes, No) [Yes]:",
"Please specify profile name that will be visible to users: redhat.com",
"NOTE: It is highly recommended to test drive the configuration before applying it into engine. Login sequence is executed automatically, but it is recommended to also execute Search sequence manually after successful Login sequence. Please provide credentials to test login flow: Enter user name: Enter user password: [ INFO ] Executing login sequence... ... [ INFO ] Login sequence executed successfully",
"Please make sure that user details are correct and group membership meets expectations (search for PrincipalRecord and GroupRecord titles). Abort if output is incorrect. Select test sequence to execute (Done, Abort, Login, Search) [Abort]:",
"Select test sequence to execute (Done, Abort, Login, Search) [Search]: Search Select entity to search (Principal, Group) [Principal]: Term to search, trailing '*' is allowed: testuser1 Resolve Groups (Yes, No) [No]:",
"Select test sequence to execute (Done, Abort, Login, Search) [Abort]: Done [ INFO ] Stage: Transaction setup [ INFO ] Stage: Misc configuration [ INFO ] Stage: Package installation [ INFO ] Stage: Misc configuration [ INFO ] Stage: Transaction commit [ INFO ] Stage: Closing up CONFIGURATION SUMMARY Profile name is: redhat.com The following files were created: /etc/ovirt-engine/aaa/redhat.com.properties /etc/ovirt-engine/extensions.d/redhat.com.properties /etc/ovirt-engine/extensions.d/redhat.com-authn.properties [ INFO ] Stage: Clean up Log file is available at /tmp/ovirt-engine-extension-aaa-ldap-setup-20171004101225-mmneib.log: [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination",
"systemctl restart ovirt-engine.service",
"dnf install ovirt-engine-extension-aaa-ldap-setup",
"ovirt-engine-extension-aaa-ldap-setup",
"Available LDAP implementations: 1 - 389ds 2 - 389ds RFC-2307 Schema 3 - Active Directory 4 - IBM Security Directory Server 5 - IBM Security Directory Server RFC-2307 Schema 6 - IPA 7 - Novell eDirectory RFC-2307 Schema 8 - OpenLDAP RFC-2307 Schema 9 - OpenLDAP Standard Schema 10 - Oracle Unified Directory RFC-2307 Schema 11 - RFC-2307 Schema (Generic) 12 - RHDS 13 - RHDS RFC-2307 Schema 14 - iPlanet Please select: 3",
"Please enter Active Directory Forest name: ad-example.redhat.com [ INFO ] Resolving Global Catalog SRV record for ad-example.redhat.com [ INFO ] Resolving LDAP SRV record for ad-example.redhat.com",
"NOTE: It is highly recommended to use secure protocol to access the LDAP server. Protocol startTLS is the standard recommended method to do so. Only in cases in which the startTLS is not supported, fallback to non standard ldaps protocol. Use plain for test environments only. Please select protocol to use (startTLS, ldaps, plain) [startTLS]: startTLS Please select method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): File Please enter the password:",
"Enter search user DN (empty for anonymous): cn=user1,ou=Users,dc=test,dc=redhat,dc=com Enter search user password:",
"Are you going to use Single Sign-On for Virtual Machines (Yes, No) [Yes]:",
"Please specify profile name that will be visible to users: redhat.com",
"NOTE: It is highly recommended to test drive the configuration before applying it into engine. Login sequence is executed automatically, but it is recommended to also execute Search sequence manually after successful Login sequence. Select test sequence to execute (Done, Abort, Login, Search) [Abort]: Login Enter search user name: testuser1 Enter search user password: [ INFO ] Executing login sequence Select test sequence to execute (Done, Abort, Login, Search) [Abort]: Search Select entity to search (Principal, Group) [Principal]: Term to search, trailing '*' is allowed: testuser1 Resolve Groups (Yes, No) [No]: [ INFO ] Executing login sequence Select test sequence to execute (Done, Abort, Login, Search) [Abort]: Done [ INFO ] Stage: Transaction setup [ INFO ] Stage: Misc configuration [ INFO ] Stage: Package installation [ INFO ] Stage: Misc configuration [ INFO ] Stage: Transaction commit [ INFO ] Stage: Closing up CONFIGURATION SUMMARY Profile name is: redhat.com The following files were created: /etc/ovirt-engine/aaa/ redhat.com .properties /etc/ovirt-engine/extensions.d/ redhat.com -authz.properties /etc/ovirt-engine/extensions.d/ redhat.com -authn.properties [ INFO ] Stage: Clean up Log file is available at /tmp/ovirt-engine-extension-aaa-ldap-setup-20160114064955-1yar9i.log: [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination",
"dnf install ovirt-engine-extension-aaa-ldap",
"cp -r /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple/. /etc/ovirt-engine",
"mv /etc/ovirt-engine/aaa/profile1.properties /etc/ovirt-engine/aaa/ example .properties mv /etc/ovirt-engine/extensions.d/profile1-authn.properties /etc/ovirt-engine/extensions.d/ example -authn.properties mv /etc/ovirt-engine/extensions.d/profile1-authz.properties /etc/ovirt-engine/extensions.d/ example -authz.properties",
"vi /etc/ovirt-engine/aaa/ example .properties",
"Select one # include = <openldap.properties> #include = <389ds.properties> #include = <rhds.properties> #include = <ipa.properties> #include = <iplanet.properties> #include = <rfc2307-389ds.properties> #include = <rfc2307-rhds.properties> #include = <rfc2307-openldap.properties> #include = <rfc2307-edir.properties> #include = <rfc2307-generic.properties> Server # vars.server = ldap1.company.com Search user and its password. # vars.user = uid=search,cn=users,cn=accounts,dc= company ,dc= com vars.password = 123456 pool.default.serverset.single.server = USD{global:vars.server} pool.default.auth.simple.bindDN = USD{global:vars.user} pool.default.auth.simple.password = USD{global:vars.password}",
"Create keystore, import certificate chain and uncomment if using tls. pool.default.ssl.startTLS = true pool.default.ssl.truststore.file = /full/path/to/myrootca.jks pool.default.ssl.truststore.password = password",
"vi /etc/ovirt-engine/extensions.d/ example -authn.properties",
"ovirt.engine.extension.name = example -authn ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.ldap ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.ldap.AuthnExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn ovirt.engine.aaa.authn.profile.name = example ovirt.engine.aaa.authn.authz.plugin = example -authz config.profile.file.1 = ../aaa/ example .properties",
"vi /etc/ovirt-engine/extensions.d/ example -authz.properties",
"ovirt.engine.extension.name = example -authz ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.ldap ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.ldap.AuthzExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz config.profile.file.1 = ../aaa/ example .properties",
"chown ovirt:ovirt /etc/ovirt-engine/aaa/ example .properties chmod 600 /etc/ovirt-engine/aaa/ example .properties",
"systemctl restart ovirt-engine.service",
"rm /etc/ovirt-engine/extensions.d/ profile1 -authn.properties rm /etc/ovirt-engine/extensions.d/ profile1 -authz.properties rm /etc/ovirt-engine/aaa/ profile1 .properties",
"systemctl restart ovirt-engine",
"kadmin kadmin> addprinc -randkey HTTP/fqdn-of-rhevm @ REALM.COM",
"kadmin> ktadd -k /tmp/http.keytab HTTP/fqdn-of-rhevm @ REALM.COM kadmin> quit",
"scp /tmp/http.keytab root@ rhevm.example.com :/etc/httpd",
"chown apache /etc/httpd/http.keytab chmod 400 /etc/httpd/http.keytab",
"dnf install ovirt-engine-extension-aaa-misc ovirt-engine-extension-aaa-ldap mod_auth_gssapi mod_session",
"cp -r /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple-sso/. /etc/ovirt-engine",
"mv /etc/ovirt-engine/aaa/ovirt-sso.conf /etc/httpd/conf.d",
"vi /etc/httpd/conf.d/ovirt-sso.conf",
"<LocationMatch ^/ovirt-engine/sso/(interactive-login-negotiate|oauth/token-http-auth)|^/ovirt-engine/api> <If \"req('Authorization') !~ /^(Bearer|Basic)/i\"> RewriteEngine on RewriteCond %{LA-U:REMOTE_USER} ^(.*)USD RewriteRule ^(.*)USD - [L,NS,P,E=REMOTE_USER:%1] RequestHeader set X-Remote-User %{REMOTE_USER}s AuthType GSSAPI AuthName \"Kerberos Login\" # Modify to match installation GssapiCredStore keytab:/etc/httpd/http.keytab GssapiUseSessions On Session On SessionCookieName ovirt_gssapi_session path=/private;httponly;secure; Require valid-user ErrorDocument 401 \"<html><meta http-equiv=\\\"refresh\\\" content=\\\"0; url=/ovirt-engine/sso/login-unauthorized\\\"/><body><a href=\\\"/ovirt-engine/sso/login-unauthorized\\\">Here</a></body></html>\" </If> </LocationMatch>",
"mv /etc/ovirt-engine/aaa/profile1.properties /etc/ovirt-engine/aaa/ example .properties",
"mv /etc/ovirt-engine/extensions.d/profile1-http-authn.properties /etc/ovirt-engine/extensions.d/ example -http-authn.properties",
"mv /etc/ovirt-engine/extensions.d/profile1-http-mapping.properties /etc/ovirt-engine/extensions.d/ example -http-mapping.properties",
"mv /etc/ovirt-engine/extensions.d/profile1-authz.properties /etc/ovirt-engine/extensions.d/ example -authz.properties",
"vi /etc/ovirt-engine/aaa/ example .properties",
"Select one include = <openldap.properties> #include = <389ds.properties> #include = <rhds.properties> #include = <ipa.properties> #include = <iplanet.properties> #include = <rfc2307-389ds.properties> #include = <rfc2307-rhds.properties> #include = <rfc2307-openldap.properties> #include = <rfc2307-edir.properties> #include = <rfc2307-generic.properties> Server # vars.server = ldap1.company.com Search user and its password. # vars.user = uid=search,cn=users,cn=accounts,dc=company,dc=com vars.password = 123456 pool.default.serverset.single.server = USD{global:vars.server} pool.default.auth.simple.bindDN = USD{global:vars.user} pool.default.auth.simple.password = USD{global:vars.password}",
"Create keystore, import certificate chain and uncomment if using ssl/tls. pool.default.ssl.startTLS = true pool.default.ssl.truststore.file = /full/path/to/myrootca.jks pool.default.ssl.truststore.password = password",
"vi /etc/ovirt-engine/extensions.d/ example -http-authn.properties",
"ovirt.engine.extension.name = example -http-authn ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.misc.http.AuthnExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn ovirt.engine.aaa.authn.profile.name = example -http ovirt.engine.aaa.authn.authz.plugin = example -authz ovirt.engine.aaa.authn.mapping.plugin = example -http-mapping config.artifact.name = HEADER config.artifact.arg = X-Remote-User",
"vi /etc/ovirt-engine/extensions.d/ example -authz.properties",
"ovirt.engine.extension.name = example -authz ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.ldap ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.ldap.AuthzExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz config.profile.file.1 = ../aaa/ example .properties",
"vi /etc/ovirt-engine/extensions.d/ example -http-mapping.properties",
"ovirt.engine.extension.name = example -http-mapping ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.misc.mapping.MappingExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Mapping config.mapAuthRecord.type = regex config.mapAuthRecord.regex.mustMatch = true config.mapAuthRecord.regex.pattern = ^(?<user>.*?)((\\\\\\\\(?<at>@)(?<suffix>.*?)@.*)|(?<realm>@.*))USD config.mapAuthRecord.regex.replacement = USD{user}USD{at}USD{suffix}",
"chown ovirt:ovirt /etc/ovirt-engine/aaa/ example .properties",
"chown ovirt:ovirt /etc/ovirt-engine/extensions.d/ example -http-authn.properties",
"chown ovirt:ovirt /etc/ovirt-engine/extensions.d/ example -http-mapping.properties",
"chown ovirt:ovirt /etc/ovirt-engine/extensions.d/ example -authz.properties",
"chmod 600 /etc/ovirt-engine/aaa/ example .properties",
"chmod 640 /etc/ovirt-engine/extensions.d/ example -http-authn.properties",
"chmod 640 /etc/ovirt-engine/extensions.d/ example -http-mapping.properties",
"chmod 640 /etc/ovirt-engine/extensions.d/ example -authz.properties",
"systemctl restart httpd.service systemctl restart ovirt-engine.service",
"dnf module enable mod_auth_openidc:2.3 -y",
"dnf install mod_auth_openidc",
"LoadModule auth_openidc_module modules/mod_auth_openidc.so OIDCProviderMetadataURL https://SSO.example.com/auth/realms/master/.well-known/openid-configuration OIDCSSLValidateServer Off OIDCClientID ovirt-engine OIDCClientSecret <client_SSO _generated_key> OIDCRedirectURI https://rhvm.example.com/ovirt-engine/callback OIDCDefaultURL https://rhvm.example.com/ovirt-engine/login?scope=ovirt-app-admin+ovirt-app-portal+ovirt-ext%3Dauth%3Asequence-priority%3D%7E maps the prefered_username claim to the REMOTE_USER environment variable: OIDCRemoteUserClaim <preferred_username> OIDCCryptoPassphrase <random1234> <LocationMatch ^/ovirt-engine/sso/(interactive-login-negotiate|oauth/token-http-auth)|^/ovirt-engine/callback> <If \"req('Authorization') !~ /^(Bearer|Basic)/i\"> Require valid-user AuthType openid-connect ErrorDocument 401 \"<html><meta http-equiv=\\\"refresh\\\" content=\\\"0; url=/ovirt-engine/sso/login-unauthorized\\\"/><body><a href=\\\"/ovirt-engine/sso/login-unauthorized\\\">Here</a></body></html>\" </If> </LocationMatch> OIDCOAuthIntrospectionEndpoint https://SSO.example.com/auth/realms/master/protocol/openid-connect/token/introspect OIDCOAuthSSLValidateServer Off OIDCOAuthIntrospectionEndpointParams token_type_hint=access_token OIDCOAuthClientID ovirt-engine OIDCOAuthClientSecret <client_SSO _generated_key> OIDCOAuthRemoteUserClaim sub <LocationMatch ^/ovirt-engine/(apiUSD|api/)> AuthType oauth20 Require valid-user </LocationMatch>",
"systemctl restart httpd systemctl restart ovirt-engine",
"ovirt.engine.extension.name = openidc-authn ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.misc.http.AuthnExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn ovirt.engine.aaa.authn.profile.name = openidchttp ovirt.engine.aaa.authn.authz.plugin = openidc-authz ovirt.engine.aaa.authn.mapping.plugin = openidc-http-mapping config.artifact.name = HEADER config.artifact.arg = OIDC_CLAIM_preferred_username",
"ovirt.engine.extension.name = openidc-http-mapping ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.misc.mapping.MappingExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Mapping config.mapAuthRecord.type = regex config.mapAuthRecord.regex.mustMatch = false config.mapAuthRecord.regex.pattern = ^(?<user>.*?)((\\\\\\\\(?<at>@)(?<suffix>.*?)@.*)|(?<realm>@.*))USD config.mapAuthRecord.regex.replacement = USD{user}USD{at}USD{suffix}",
"ovirt.engine.extension.name = openidc-authz ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine.extension.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engine.extension.aaa.misc.http.AuthzExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz config.artifact.name.arg = OIDC_CLAIM_preferred_username config.artifact.groups.arg = OIDC_CLAIM_groups",
"ENGINE_SSO_ENABLE_EXTERNAL_SSO=true ENGINE_SSO_EXTERNAL_SSO_LOGOUT_URI=\"USD{ENGINE_URI}/callback\" EXTERNAL_OIDC_USER_INFO_END_POINT=https://SSO.example.com/auth/realms/master/protocol/openid-connect/userinfo EXTERNAL_OIDC_TOKEN_END_POINT=https://SSO.example.com/auth/realms/master/protocol/openid-connect/token EXTERNAL_OIDC_LOGOUT_END_POINT=https://SSO.example.com/auth/realms/master/protocol/openid-connect/logout EXTERNAL_OIDC_CLIENT_ID=ovirt-engine EXTERNAL_OIDC_CLIENT_SECRET=\"<client_SSO _generated_key>\" EXTERNAL_OIDC_HTTPS_PKI_TRUST_STORE=\"/etc/pki/java/cacerts\" EXTERNAL_OIDC_HTTPS_PKI_TRUST_STORE_PASSWORD=\"\" EXTERNAL_OIDC_SSL_VERIFY_CHAIN=false EXTERNAL_OIDC_SSL_VERIFY_HOST=false",
"[OVIRT] ovirt-admin-user-name=user1@openidchttp",
"systemctl restart ovirt-provider-ovn",
"(...) #################################### Generic OAuth ####################### [auth.generic_oauth] name = oVirt Engine Auth enabled = true allow_sign_up = true client_id = ovirt-engine client_secret = <client-secret-of-RH-SSO> scopes = openid,ovirt-app-admin,ovirt-app-portal,ovirt-ext=auth:sequence-priority=~ email_attribute_name = email:primary role_attribute_path = \"contains(realm_access.roles[*], 'admin') && 'Admin' || contains(realm_access.roles[*], 'editor') && 'Editor' || 'Viewer'\" auth_url = https://<rh-sso-hostname>/auth/realms/<RH-SSO-REALM>/protocol/openid-connect/auth token_url = https://<rh-sso-hostname>/auth/realms/<RH-SSO-REALM>/protocol/openid-connect/token api_url = https://<rh-sso-hostname>/auth/realms/<RH-SSO-REALM>/protocol/openid-connect/userinfo team_ids = allowed_organizations = tls_skip_verify_insecure = false tls_client_cert = tls_client_key = tls_client_ca = /etc/pki/ovirt-engine/apache-ca.pem send_client_credentials_via_post = false (...)",
"ovirt-aaa-jdbc-tool user add test1 --attribute=firstName= John --attribute=lastName= Doe adding user test1 user added successfully",
"ovirt-aaa-jdbc-tool user password-reset test1 --password-valid-to= \"2025-08-01 12:00:00-0800\" Password: updating user test1 user updated successfully",
"engine-config --set UserSessionTimeOutInterval= integer",
"/usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh pbe-encode",
"/usr/share/ovirt-engine/bin/ovirt-engine-crypto-tool.sh pbe-encode --password=file: file",
"ovirt-aaa-jdbc-tool user password-reset test1 --password-valid-to=\"2025-08-01 12:00:00-0800\" --encrypted",
"Password: Reenter password: updating user test1 user updated successfully",
"ovirt-aaa-jdbc-tool user show test1",
"ovirt-aaa-jdbc-tool user edit test1 [email protected]",
"ovirt-aaa-jdbc-tool user delete test1",
"ovirt-aaa-jdbc-tool user edit admin --flag=+disabled",
"ovirt-aaa-jdbc-tool group add group1",
"ovirt-aaa-jdbc-tool group-manage useradd group1 --user= test1",
"ovirt-aaa-jdbc-tool group show group1",
"ovirt-aaa-jdbc-tool group add group1",
"ovirt-aaa-jdbc-tool group add group1-1",
"ovirt-aaa-jdbc-tool group-manage groupadd group1 --group= group1-1",
"ovirt-aaa-jdbc-tool query --what=user",
"ovirt-aaa-jdbc-tool query --what=group",
"ovirt-aaa-jdbc-tool query --what=user --pattern=\"name= j* \"",
"ovirt-aaa-jdbc-tool query --what=group --pattern=\"department= marketing \"",
"ovirt-aaa-jdbc-tool settings show",
"ovirt-aaa-jdbc-tool settings set --name=MAX_LOGIN_MINUTES --value= 60",
"ovirt-aaa-jdbc-tool settings set --name=MAX_FAILURES_SINCE_SUCCESS --value= 3"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-users_and_roles
|
Chapter 106. Openapi Java
|
Chapter 106. Openapi Java The Rest DSL can be integrated with the camel-openapi-java module which is used for exposing the REST services and their APIs using OpenApi . The camel-openapi-java module can be used from the REST components (without the need for servlet). 106.1. Dependencies When using openapi-java with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-openapi-java-starter</artifactId> </dependency> 106.2. Using OpenApi in rest-dsl You can enable the OpenApi api from the rest-dsl by configuring the apiContextPath dsl as shown below: public class UserRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { // configure we want to use servlet as the component for the rest DSL // and we enable json binding mode restConfiguration().component("netty-http").bindingMode(RestBindingMode.json) // and output using pretty print .dataFormatProperty("prettyPrint", "true") // setup context path and port number that netty will use .contextPath("/").port(8080) // add OpenApi api-doc out of the box .apiContextPath("/api-doc") .apiProperty("api.title", "User API").apiProperty("api.version", "1.2.3") // and enable CORS .apiProperty("cors", "true"); // this user REST service is json only rest("/user").description("User rest service") .consumes("application/json").produces("application/json") .get("/{id}").description("Find user by id").outType(User.class) .param().name("id").type(path).description("The id of the user to get").dataType("int").endParam() .to("bean:userService?method=getUser(USD{header.id})") .put().description("Updates or create a user").type(User.class) .param().name("body").type(body).description("The user to update or create").endParam() .to("bean:userService?method=updateUser") .get("/findAll").description("Find all users").outType(User[].class) .to("bean:userService?method=listUsers"); } } 106.3. Options The OpenApi module can be configured using the following options. To configure using a servlet you use the init-param as shown above. When configuring directly in the rest-dsl, you use the appropriate method, such as enableCORS , host,contextPath , dsl. The options with api.xxx is configured using apiProperty dsl. Option Type Description cors Boolean Whether to enable CORS. Notice this only enables CORS for the api browser, and not the actual access to the REST services. Is default false. openapi.version String OpenApi spec version. Is default 3.0. host String To setup the hostname. If not configured camel-openapi-java will calculate the name as localhost based. schemes String The protocol schemes to use. Multiple values can be separated by comma such as "http,https". The default value is "http". base.path String Required : To setup the base path where the REST services is available. The path is relative (eg do not start with http/https) and camel-openapi-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/base.path api.path String To setup the path where the API is available (eg /api-docs). The path is relative (eg do not start with http/https) and camel-openapi-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/api.path So using relative paths is much easier. See above for an example. api.version String The version of the api. Is default 0.0.0. api.title String The title of the application. 
api.description String A short description of the application. api.termsOfService String A URL to the Terms of Service of the API. api.contact.name String Name of person or organization to contact api.contact.email String An email to be used for API-related correspondence. api.contact.url String A URL to a website for more contact information. api.license.name String The license name used for the API. api.license.url String A URL to the license used for the API. 106.4. Adding Security Definitions in API doc The Rest DSL now supports declaring OpenApi securityDefinitions in the generated API document. For example as shown below: rest("/user").tag("dude").description("User rest service") // setup security definitions .securityDefinitions() .oauth2("petstore_auth").authorizationUrl("http://petstore.swagger.io/oauth/dialog").end() .apiKey("api_key").withHeader("myHeader").end() .end() .consumes("application/json").produces("application/json") Here we have setup two security definitions OAuth2 - with implicit authorization with the provided url Api Key - using an api key that comes from HTTP header named myHeader Then you need to specify on the rest operations which security to use by referring to their key (petstore_auth or api_key). .get("/{id}/{date}").description("Find user by id and date").outType(User.class) .security("api_key") ... .put().description("Updates or create a user").type(User.class) .security("petstore_auth", "write:pets,read:pets") Here the get operation is using the Api Key security and the put operation is using OAuth security with permitted scopes of read and write pets. 106.5. JSon or Yaml The camel-openapi-java module supports both JSon and Yaml out of the box. You can specify in the request url what you want returned by using /openapi.json or /openapi.yaml for either one. If none is specified then the HTTP Accept header is used to detect if json or yaml can be accepted. If either both is accepted or none was set as accepted then json is returned as the default format. 106.6. useXForwardHeaders and API URL resolution The OpenApi specification allows you to specify the host, port & path that is serving the API. In OpenApi V2 this is done via the host field and in OpenAPI V3 it is part of the servers field. By default, the value for these fields is determined by X-Forwarded headers, X-Forwarded-Host & X-Forwarded-Proto . This can be overridden by disabling the lookup of X-Forwarded headers and by specifying your own host, port & scheme on the REST configuration. restConfiguration().component("netty-http") .useXForwardHeaders(false) .apiProperty("schemes", "https"); .host("localhost") .port(8080); 106.7. Examples In the Apache Camel distribution we ship the camel-example-openapi-cdi and camel-example-spring-boot-rest-openapi-simple which demonstrates using this OpenApi component. 106.8. Spring Boot Auto-Configuration The component supports 1 options, which are listed below. Name Description Default Type camel.openapi.enabled Enables Camel Rest DSL to automatic register its OpenAPI (eg swagger doc) in Spring Boot which allows tooling such as SpringDoc to integrate with Camel. true Boolean
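As a quick check of the JSon or Yaml behaviour described above, you can request the generated document with curl once the application is running. This is a sketch only: the host, port, and /api-doc path follow the earlier rest-dsl example and must be adjusted to match your own restConfiguration.
# Request the generated OpenAPI document; the format is chosen from the Accept header
# when no explicit suffix is used. localhost:8080 and /api-doc are assumptions from the example above.
curl -s -H "Accept: application/json" http://localhost:8080/api-doc
curl -s -H "Accept: application/yaml" http://localhost:8080/api-doc
Depending on how the API context path is mounted in your deployment, you can also select the format explicitly by requesting openapi.json or openapi.yaml, as described above, regardless of the Accept header.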
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-openapi-java-starter</artifactId> </dependency>",
"public class UserRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { // configure we want to use servlet as the component for the rest DSL // and we enable json binding mode restConfiguration().component(\"netty-http\").bindingMode(RestBindingMode.json) // and output using pretty print .dataFormatProperty(\"prettyPrint\", \"true\") // setup context path and port number that netty will use .contextPath(\"/\").port(8080) // add OpenApi api-doc out of the box .apiContextPath(\"/api-doc\") .apiProperty(\"api.title\", \"User API\").apiProperty(\"api.version\", \"1.2.3\") // and enable CORS .apiProperty(\"cors\", \"true\"); // this user REST service is json only rest(\"/user\").description(\"User rest service\") .consumes(\"application/json\").produces(\"application/json\") .get(\"/{id}\").description(\"Find user by id\").outType(User.class) .param().name(\"id\").type(path).description(\"The id of the user to get\").dataType(\"int\").endParam() .to(\"bean:userService?method=getUser(USD{header.id})\") .put().description(\"Updates or create a user\").type(User.class) .param().name(\"body\").type(body).description(\"The user to update or create\").endParam() .to(\"bean:userService?method=updateUser\") .get(\"/findAll\").description(\"Find all users\").outType(User[].class) .to(\"bean:userService?method=listUsers\"); } }",
"rest(\"/user\").tag(\"dude\").description(\"User rest service\") // setup security definitions .securityDefinitions() .oauth2(\"petstore_auth\").authorizationUrl(\"http://petstore.swagger.io/oauth/dialog\").end() .apiKey(\"api_key\").withHeader(\"myHeader\").end() .end() .consumes(\"application/json\").produces(\"application/json\")",
".get(\"/{id}/{date}\").description(\"Find user by id and date\").outType(User.class) .security(\"api_key\") .put().description(\"Updates or create a user\").type(User.class) .security(\"petstore_auth\", \"write:pets,read:pets\")",
"restConfiguration().component(\"netty-http\") .useXForwardHeaders(false) .apiProperty(\"schemes\", \"https\"); .host(\"localhost\") .port(8080);"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-openapi-java-starter
|
Chapter 3. Encryption and Key Management
|
Chapter 3. Encryption and Key Management The Red Hat Ceph Storage cluster typically resides in its own network security zone, especially when using a private storage cluster network. Important Security zone separation might be insufficient for protection if an attacker gains access to Ceph clients on the public network. There are situations where there is a security requirement to assure the confidentiality or integrity of network traffic, and where Red Hat Ceph Storage uses encryption and key management, including: SSH SSL Termination Messenger v2 protocol Encryption in Transit Encryption at Rest Key rotation 3.1. SSH All nodes in the Red Hat Ceph Storage cluster use SSH as part of deploying the cluster. This means that on each node: A cephadm user exists with password-less root privileges. The SSH service is enabled and by extension port 22 is open. A copy of the cephadm user's public SSH key is available. Important Any person with access to the cephadm user by extension has permission to run commands as root on any node in the Red Hat Ceph Storage cluster. Additional Resources See the How cephadm works section in the Red Hat Ceph Storage Installation Guide for more information. 3.2. SSL Termination The Ceph Object Gateway may be deployed in conjunction with HAProxy and keepalived for load balancing and failover. In Red Hat Ceph Storage versions 2 and 3, the Ceph Object Gateway uses Civetweb. Earlier versions of Civetweb do not support SSL and later versions support SSL with some performance limitations. In Red Hat Ceph Storage version 5, the Ceph Object Gateway uses Beast. You can configure the Beast front-end web server to use the OpenSSL library to provide Transport Layer Security (TLS). When using HAProxy and keepalived to terminate SSL connections, the HAProxy and keepalived components use encryption keys. When using HAProxy and keepalived to terminate SSL, the connection between the load balancer and the Ceph Object Gateway is NOT encrypted. See Configuring SSL for Beast and HAProxy and keepalived for details. 3.3. Messenger v2 protocol The second version of Ceph's on-wire protocol, msgr2 , has the following features: A secure mode encrypting all data moving through the network. Improved encapsulation of authentication payloads, enabling future integration of new authentication modes. Improvements to feature advertisement and negotiation. The Ceph daemons bind to multiple ports, allowing both the legacy v1-compatible and the new v2-compatible Ceph clients to connect to the same storage cluster. Ceph clients or other Ceph daemons connecting to the Ceph Monitor daemon use the v2 protocol first, if possible, but if not, then the legacy v1 protocol is used. By default, both messenger protocols, v1 and v2 , are enabled. The new v2 port is 3300, and the legacy v1 port is 6789, by default. The messenger v2 protocol has two configuration options that control whether the v1 or the v2 protocol is used: ms_bind_msgr1 - This option controls whether a daemon binds to a port speaking the v1 protocol; it is true by default. ms_bind_msgr2 - This option controls whether a daemon binds to a port speaking the v2 protocol; it is true by default. Similarly, two options control whether IPv4 and IPv6 addresses are used: ms_bind_ipv4 - This option controls whether a daemon binds to an IPv4 address; it is true by default. ms_bind_ipv6 - This option controls whether a daemon binds to an IPv6 address; it is true by default. Note The ability to bind to multiple ports has paved the way for dual-stack IPv4 and IPv6 support.
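A quick way to confirm which on-wire protocols the cluster is actually advertising is to inspect the monitor map and the bind options listed above. The commands below are a sketch and assume the ceph CLI is available on an admin node; the address in the comment is only an illustration of the output format.
# Monitor addresses show both endpoints, for example [v2:192.168.0.10:3300/0,v1:192.168.0.10:6789/0]
ceph mon dump
# Check the current values of the bind options described above (true by default)
ceph config get mon ms_bind_msgr1
ceph config get mon ms_bind_msgr2
ceph config get mon ms_bind_ipv4
ceph config get mon ms_bind_ipv6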
The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx . Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx . Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . Ceph Object Gateway Encryption Also, the Ceph Object Gateway supports encryption with customer-provided keys using its S3 API. Important To comply with regulatory compliance standards requiring strict encryption in transit, administrators MUST deploy the Ceph Object Gateway with client-side encryption. Ceph Block Device Encryption System administrators integrating Ceph as a backend for Red Hat OpenStack Platform 13 MUST encrypt Ceph block device volumes using dm_crypt for RBD Cinder to ensure on-wire encryption within the Ceph storage cluster. Important To comply with regulatory compliance standards requiring strict encryption in transit, system administrators MUST use dmcrypt for RBD Cinder to ensure on-wire encryption within the Ceph storage cluster. Additional resources See the Connection mode configuration options in the Red Hat Ceph Storage Configuration Guide for more details. 3.4. Encryption in transit Starting with Red Hat Ceph Storage 5, encryption for all Ceph traffic over the network is enabled by default, with the introduction of the messenger version 2 protocol. The secure mode setting for messenger v2 encrypts communication between Ceph daemons and Ceph clients, providing end-to-end encryption. You can check for encryption of the messenger v2 protocol with the ceph config dump command, the netstat -Ip | grep ceph-osd command, or by verifying that the Ceph daemons are listening on the v2 ports. Additional resources See SSL Termination for details on SSL termination. See S3 server-side encryption for details on S3 API encryption. 3.5. Compression modes of messenger v2 protocol The messenger v2 protocol supports the compression feature. Compression is not enabled by default. Compressing and encrypting the same message is not recommended as the level of security of messages between peers is reduced. If encryption is enabled, a request to enable compression is ignored until the configuration option ms_osd_compress_mode is set to true . It supports two compression modes: force In multi-availability zone deployments, compressing replication messages between OSDs reduces latency. In the public cloud, it minimizes message size, thereby reducing network costs to the cloud provider. Instances on public clouds with NVMe provide low network bandwidth relative to the device bandwidth. Does not provide protection against a malicious man-in-the-middle attack. none The messages are transmitted without compression. To ensure that compression of the messages is enabled, run the debug_ms command and check the debug entries for connections. Also, you can run the ceph config get command to get details about the different configuration options for the network messages. Additional resources See the Compression mode configuration options in the Red Hat Ceph Storage Configuration Guide for more details. 3.6.
Encryption at Rest Red Hat Ceph Storage supports encryption at rest in a few scenarios: Ceph Storage Cluster: The Ceph Storage Cluster supports Linux Unified Key Setup or LUKS encryption of Ceph OSDs and their corresponding journals, write-ahead logs, and metadata databases. In this scenario, Ceph will encrypt all data at rest irrespective of whether the client is a Ceph Block Device, Ceph Filesystem, or a custom application built on librados . Ceph Object Gateway: The Ceph storage cluster supports encryption of client objects. When the Ceph Object Gateway encrypts objects, they are encrypted independently of the Red Hat Ceph Storage cluster. Additionally, the data transmitted between the Ceph Object Gateway and the Ceph Storage Cluster is in encrypted form. Ceph Storage Cluster Encryption The Ceph storage cluster supports encrypting data stored in Ceph OSDs. Red Hat Ceph Storage can encrypt logical volumes with lvm by specifying dmcrypt ; that is, lvm , invoked by ceph-volume , encrypts an OSD's logical volume, not its physical volume. It can encrypt non-LVM devices like partitions using the same OSD key. Encrypting logical volumes allows for more configuration flexibility. Ceph uses LUKS v1 rather than LUKS v2, because LUKS v1 has the broadest support among Linux distributions. When creating an OSD, lvm will generate a secret key and pass the key to the Ceph Monitors securely in a JSON payload via stdin . The attribute name for the encryption key is dmcrypt_key . Important System administrators must explicitly enable encryption. By default, Ceph does not encrypt data stored in Ceph OSDs. System administrators must enable dmcrypt to encrypt data stored in Ceph OSDs. When using a Ceph Orchestrator service specification file for adding Ceph OSDs to the storage cluster, set the following option in the file to encrypt Ceph OSDs: Example Note LUKS and dmcrypt only address encryption for data at rest, not encryption for data in transit. Ceph Object Gateway Encryption The Ceph Object Gateway supports encryption with customer-provided keys using its S3 API. When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer's responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object. Additional Resources See S3 API server-side encryption in the Red Hat Ceph Storage Developer Guide for details. 3.7. Enabling key rotation Ceph and Ceph Object Gateway daemons within the Ceph cluster have a secret key. This key is used to connect to and authenticate with the cluster. You can update an active security key within an active Ceph cluster with minimal service interruption by using the key rotation feature. Note The active Ceph cluster includes nodes within the Ceph client role with parallel key changes. Key rotation helps ensure that current industry and security compliance requirements are met. Prerequisites A running Red Hat Ceph Storage cluster. User with admin privileges. Procedure Rotate the key: Syntax Example If using a daemon other than MDS, OSD, or MGR, restart the daemon to switch to the new key. MDS, OSD, and MGR daemons do not require a daemon restart. Syntax Example
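For reference, the encrypted: true option shown in the Ceph Storage Cluster Encryption part of this section is set inside a Ceph Orchestrator OSD service specification file. The following is a minimal sketch of such a file; the service ID, host pattern, device selection, and file name are assumptions that must be adapted to your environment.
# osd-spec.yaml - illustrative sketch of an OSD service specification with encryption at rest enabled
service_type: osd
service_id: encrypted-osds        # assumed name
placement:
  host_pattern: '*'               # all hosts; narrow this for production
data_devices:
  all: true                       # use all available raw devices
encrypted: true
Apply the specification with ceph orch apply -i osd-spec.yaml. Encryption applies to OSDs created from this specification; it is not applied retroactively to OSDs that already exist.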
|
[
"encrypted: true",
"ceph orch daemon rotate-key NAME",
"ceph orch daemon rotate-key mgr.ceph-key-host01 Scheduled to rotate-key mgr.ceph-key-host01 on host 'my-host-host01-installer'",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/data_security_and_hardening_guide/assembly-encryption-and-key-management
|
Chapter 7. Accessing a ROSA cluster
|
Chapter 7. Accessing a ROSA cluster It is recommended that you access your Red Hat OpenShift Service on AWS (ROSA) cluster using an identity provider (IDP) account. However, the cluster administrator who created the cluster can access it using the quick access procedure. This document describes how to access a cluster and set up an IDP using the ROSA CLI ( rosa ). Alternatively, you can create an IDP account using OpenShift Cluster Manager console. 7.1. Accessing your cluster quickly You can use this quick access procedure to log in to your cluster. Note As a best practice, access your cluster with an IDP account instead. Procedure Enter the following command: USD rosa create admin --cluster=<cluster_name> Example output W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster 'cluster_name'. It may take up to a minute for the account to become active. I: To login, run the following command: oc login https://api.cluster-name.t6k4.i1.organization.org:6443 \ 1 --username cluster-admin \ --password FWGYL-2mkJI-3ZTTZ-rINns 1 For a Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster, the port number should be 443 . Enter the oc login command, username, and password from the output of the command: Example output USD oc login https://api.cluster_name.t6k4.i1.organization.org:6443 \ 1 > --username cluster-admin \ > --password FWGYL-2mkJI-3ZTTZ-rINns Login successful. You have access to 77 projects, the list has been suppressed. You can list all projects with 'projects' 1 For a ROSA with HCP cluster, the port number should be 443 . Using the default project, enter this oc command to verify that the cluster administrator access is created: USD oc whoami Example output cluster-admin 7.2. Accessing your cluster with an IDP account To log in to your cluster, you can configure an identity provider (IDP). This procedure uses GitHub as an example IDP. To view other supported IDPs, run the rosa create idp --help command. Note Alternatively, as the user who created the cluster, you can use the quick access procedure. Procedure To access your cluster using an IDP account: Add an IDP. The following command creates an IDP backed by GitHub. After running the command, follow the interactive prompts from the output to access your GitHub developer settings and configure a new OAuth application. USD rosa create idp --cluster=<cluster_name> --interactive Enter the following values: Type of identity provider: github Restrict to members of: organizations (if you do not have a GitHub Organization, you can create one now) GitHub organizations: rh-test-org (enter the name of your organization) Example output I: Interactive mode enabled. Any optional fields can be left empty and a default will be selected. ? Type of identity provider: github ? Restrict to members of: organizations ? GitHub organizations: rh-test-org ? To use GitHub as an identity provider, you must first register the application: - Open the following URL: https://github.com/organizations/rh-rosa-test-cluster/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.rh-rosa-test-cluster.z7v0.s1.devshift.org%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=rh-rosa-test-cluster-stage&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.rh-rosa-test-cluster.z7v0.s1.devshift.org - Click on 'Register application' ... 
Follow the URL in the output and select Register application to register a new OAuth application in your GitHub organization. By registering the application, you enable the OAuth server that is built into ROSA to authenticate members of your GitHub organization into your cluster. Note The fields in the Register a new OAuth application GitHub form are automatically filled with the required values through the URL that is defined by the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . Use the information from the GitHub application you created and continue the prompts. Enter the following values: Client ID: <my_github_client_id> Client Secret: [? for help] <my_github_client_secret> Hostname: (optional, you can leave it blank for now) Mapping method: claim Continued example output ... ? Client ID: <my_github_client_id> ? Client Secret: [? for help] <my_github_client_secret> ? Hostname: ? Mapping method: claim I: Configuring IDP for cluster 'rh_rosa_test_cluster' I: Identity Provider 'github-1' has been created. You need to ensure that there is a list of cluster administrators defined. See 'rosa create user --help' for more information. To login into the console, open https://console-openshift-console.apps.rh-test-org.z7v0.s1.devshift.org and click on github-1 The IDP can take 1-2 minutes to be configured within your cluster. Enter the following command to verify that your IDP has been configured correctly: USD rosa list idps --cluster=<cluster_name> Example output NAME TYPE AUTH URL github-1 GitHub https://oauth-openshift.apps.rh-rosa-test-cluster1.j9n4.s1.devshift.org/oauth2callback/github-1 Log in to your cluster. Enter the following command to get the Console URL of your cluster: USD rosa describe cluster --cluster=<cluster_name> Example output Name: rh-rosa-test-cluster1 ID: 1de87g7c30g75qechgh7l5b2bha6r04e External ID: 34322be7-b2a7-45c2-af39-2c684ce624e1 API URL: https://api.rh-rosa-test-cluster1.j9n4.s1.devshift.org:6443 1 Console URL: https://console-openshift-console.apps.rh-rosa-test-cluster1.j9n4.s1.devshift.org Nodes: Master: 3, Infra: 3, Compute: 4 Region: us-east-2 State: ready Created: May 27, 2020 1 For a Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster, the port number should be 443 . Navigate to the Console URL , and log in using your Github credentials. In the top right of the OpenShift console, click your name and click Copy Login Command . Select the name of the IDP you added (in our case github-1 ), and click Display Token . Copy and paste the oc login command into your terminal. USD oc login --token=z3sgOGVDk0k4vbqo_wFqBQQTnT-nA-nQLb8XEmWnw4X --server=https://api.rh-rosa-test-cluster1.j9n4.s1.devshift.org:6443 1 1 For a ROSA with HCP cluster, use the port number 443 . Example output Logged into "https://api.rh-rosa-cluster1.j9n4.s1.devshift.org:6443" as "rh-rosa-test-user" using the token provided. 1 You have access to 67 projects, the list has been suppressed. You can list all projects with 'oc projects' Using project "default". 1 For a ROSA with HCP cluster, the port number should be 443 . Enter a simple oc command to verify everything is setup properly and that you are logged in. USD oc version Example output Client Version: 4.4.0-202005231254-4a4cd75 Server Version: 4.3.18 Kubernetes Version: v1.16.2 7.3. Granting cluster-admin access As the user who created the cluster, add the cluster-admin user role to your account to have the maximum administrator privileges. 
These privileges are not automatically assigned to your user account when you create the cluster. Additionally, only the user who created the cluster can grant cluster access to other cluster-admin or dedicated-admin users. Users with dedicated-admin access have fewer privileges. As a best practice, limit the number of cluster-admin users to as few as possible. Prerequisites You have added an identity provider (IDP) to your cluster. You have the IDP user name for the user you are creating. You are logged in to the cluster. Procedure Give your user cluster-admin privileges: USD rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> Verify that your user is listed as a cluster administrator: USD rosa list users --cluster=<cluster_name> Example output GROUP NAME cluster-admins rh-rosa-test-user dedicated-admins rh-rosa-test-user Enter the following command to verify that your user now has cluster-admin access. A cluster administrator can run this command without errors, but a dedicated administrator cannot. USD oc get all -n openshift-apiserver Example output NAME READY STATUS RESTARTS AGE pod/apiserver-6ndg2 1/1 Running 0 17h pod/apiserver-lrmxs 1/1 Running 0 17h pod/apiserver-tsqhz 1/1 Running 0 17h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/api ClusterIP 172.30.23.241 <none> 443/TCP 18h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/apiserver 3 3 3 3 3 node-role.kubernetes.io/master= 18h Additional resources Cluster administration role 7.4. Granting dedicated-admin access Only the user who created the cluster can grant cluster access to other cluster-admin or dedicated-admin users. Users with dedicated-admin access have fewer privileges. As a best practice, grant dedicated-admin access to most of your administrators. Prerequisites You have added an identity provider (IDP) to your cluster. You have the IDP user name for the user you are creating. You are logged in to the cluster. Procedure Enter the following command to promote your user to a dedicated-admin : USD rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> Enter the following command to verify that your user now has dedicated-admin access: USD oc get groups dedicated-admins Example output NAME USERS dedicated-admins rh-rosa-test-user Note A Forbidden error displays if a user without dedicated-admin privileges runs this command. Additional resources Customer administrator user 7.5. Additional resources Configuring identity providers using Red Hat OpenShift Cluster Manager console Understanding the ROSA with STS deployment workflow Adding notification contacts
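If you want to script IDP creation instead of answering the interactive prompts shown earlier, rosa create idp also accepts the values as flags. The following is a sketch only: confirm the exact flag names with rosa create idp --help for your installed rosa version, and treat the organization, client ID, and client secret values as placeholders.
USD rosa create idp --cluster=<cluster_name> \
    --type=github \
    --name=github-1 \
    --organizations=rh-test-org \
    --client-id=<my_github_client_id> \
    --client-secret=<my_github_client_secret> \
    --mapping-method=claim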
|
[
"rosa create admin --cluster=<cluster_name>",
"W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster 'cluster_name'. It may take up to a minute for the account to become active. I: To login, run the following command: login https://api.cluster-name.t6k4.i1.organization.org:6443 \\ 1 --username cluster-admin --password FWGYL-2mkJI-3ZTTZ-rINns",
"oc login https://api.cluster_name.t6k4.i1.organization.org:6443 \\ 1 > --username cluster-admin > --password FWGYL-2mkJI-3ZTTZ-rINns Login successful. You have access to 77 projects, the list has been suppressed. You can list all projects with 'projects'",
"oc whoami",
"cluster-admin",
"rosa create idp --cluster=<cluster_name> --interactive",
"I: Interactive mode enabled. Any optional fields can be left empty and a default will be selected. ? Type of identity provider: github ? Restrict to members of: organizations ? GitHub organizations: rh-test-org ? To use GitHub as an identity provider, you must first register the application: - Open the following URL: https://github.com/organizations/rh-rosa-test-cluster/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.rh-rosa-test-cluster.z7v0.s1.devshift.org%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=rh-rosa-test-cluster-stage&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.rh-rosa-test-cluster.z7v0.s1.devshift.org - Click on 'Register application'",
"? Client ID: <my_github_client_id> ? Client Secret: [? for help] <my_github_client_secret> ? Hostname: ? Mapping method: claim I: Configuring IDP for cluster 'rh_rosa_test_cluster' I: Identity Provider 'github-1' has been created. You need to ensure that there is a list of cluster administrators defined. See 'rosa create user --help' for more information. To login into the console, open https://console-openshift-console.apps.rh-test-org.z7v0.s1.devshift.org and click on github-1",
"rosa list idps --cluster=<cluster_name>",
"NAME TYPE AUTH URL github-1 GitHub https://oauth-openshift.apps.rh-rosa-test-cluster1.j9n4.s1.devshift.org/oauth2callback/github-1",
"rosa describe cluster --cluster=<cluster_name>",
"Name: rh-rosa-test-cluster1 ID: 1de87g7c30g75qechgh7l5b2bha6r04e External ID: 34322be7-b2a7-45c2-af39-2c684ce624e1 API URL: https://api.rh-rosa-test-cluster1.j9n4.s1.devshift.org:6443 1 Console URL: https://console-openshift-console.apps.rh-rosa-test-cluster1.j9n4.s1.devshift.org Nodes: Master: 3, Infra: 3, Compute: 4 Region: us-east-2 State: ready Created: May 27, 2020",
"oc login --token=z3sgOGVDk0k4vbqo_wFqBQQTnT-nA-nQLb8XEmWnw4X --server=https://api.rh-rosa-test-cluster1.j9n4.s1.devshift.org:6443 1",
"Logged into \"https://api.rh-rosa-cluster1.j9n4.s1.devshift.org:6443\" as \"rh-rosa-test-user\" using the token provided. 1 You have access to 67 projects, the list has been suppressed. You can list all projects with 'oc projects' Using project \"default\".",
"oc version",
"Client Version: 4.4.0-202005231254-4a4cd75 Server Version: 4.3.18 Kubernetes Version: v1.16.2",
"rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name>",
"rosa list users --cluster=<cluster_name>",
"GROUP NAME cluster-admins rh-rosa-test-user dedicated-admins rh-rosa-test-user",
"oc get all -n openshift-apiserver",
"NAME READY STATUS RESTARTS AGE pod/apiserver-6ndg2 1/1 Running 0 17h pod/apiserver-lrmxs 1/1 Running 0 17h pod/apiserver-tsqhz 1/1 Running 0 17h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/api ClusterIP 172.30.23.241 <none> 443/TCP 18h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/apiserver 3 3 3 3 3 node-role.kubernetes.io/master= 18h",
"rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>",
"oc get groups dedicated-admins",
"NAME USERS dedicated-admins rh-rosa-test-user"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_rosa_classic_clusters/rosa-sts-accessing-cluster
|
Chapter 2. Using project workbenches
|
Chapter 2. Using project workbenches 2.1. Creating a workbench and selecting an IDE A workbench is an isolated area where you can examine and work with ML models. You can also work with data and run programs, for example to prepare and clean data. While a workbench is not required if, for example, you only want to service an existing model, one is needed for most data science workflow tasks, such as writing code to process data or training a model. When you create a workbench, you specify an image (an IDE, packages, and other dependencies). Supported IDEs include JupyterLab, code-server, and RStudio (Technology Preview). The IDEs are based on a server-client architecture. Each IDE provides a server that runs in a container on the OpenShift cluster, while the user interface (the client) is displayed in your web browser. For example, the Jupyter notebook server runs in a container on the Red Hat OpenShift cluster. The client is the JupyterLab interface that opens in your web browser on your local computer. All of the commands that you enter in JupyterLab are executed by the notebook server. Similarly, other IDEs like code-server or RStudio Server provide a server that runs in a container on the OpenShift cluster, while the user interface is displayed in your web browser. This architecture allows you to interact through your local computer in a browser environment, while all processing occurs on the cluster. The cluster provides the benefits of larger available resources and security because the data being processed never leaves the cluster. In a workbench, you can also configure connections (to access external data for training models and to save models so that you can deploy them) and cluster storage (for persisting data). Workbenches within the same project can share models and data through object storage with the data science pipelines and model servers. For data science projects that require data retention, you can add container storage to the workbench you are creating. Within a project, you can create multiple workbenches. When to create a new workbench depends on considerations, such as the following: The workbench configuration (for example, CPU, RAM, or IDE). If you want to avoid editing an existing workbench's configuration to accommodate a new task, you can create a new workbench instead. Separation of tasks or activities. For example, you might want to use one workbench for your Large Language Models (LLM) experimentation activities, another workbench dedicated to a demo, and another workbench for testing. 2.1.1. About workbench images A workbench image (sometimes referred to as a notebook image) is optimized with the tools and libraries that you need for model development. You can use the provided workbench images or an OpenShift AI administrator can create custom workbench images adapted to your needs. To provide a consistent, stable platform for your model development, many provided workbench images contain the same version of Python. Most workbench images available on OpenShift AI are pre-built and ready for you to use immediately after OpenShift AI is installed or upgraded. For information about Red Hat support of workbench images and packages, see Red Hat OpenShift AI: Supported Configurations . The following table lists the workbench images that are installed with Red Hat OpenShift AI by default.
If the preinstalled packages that are provided in these images are not sufficient for your use case, you have the following options: Install additional libraries after launching a default image. This option is good if you want to add libraries on an ad hoc basis as you develop models. However, it can be challenging to manage the dependencies of installed libraries and your changes are not saved when the workbench restarts. Create a custom image that includes the additional libraries or packages. For more information, see Creating custom workbench images . Important Workbench images denoted with (Technology Preview) in this table are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using Technology Preview features in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Table 2.1. Default workbench images Image name Description CUDA If you are working with compute-intensive data science models that require GPU support, use the Compute Unified Device Architecture (CUDA) workbench image to gain access to the NVIDIA CUDA Toolkit. Using this toolkit, you can optimize your work by using GPU-accelerated libraries and optimization tools. Standard Data Science Use the Standard Data Science workbench image for models that do not require TensorFlow or PyTorch. This image contains commonly-used libraries to assist you in developing your machine learning models. TensorFlow TensorFlow is an open source platform for machine learning. With TensorFlow, you can build, train and deploy your machine learning models. TensorFlow contains advanced data visualization features, such as computational graph visualizations. It also allows you to easily monitor and track the progress of your models. PyTorch PyTorch is an open source machine learning library optimized for deep learning. If you are working with computer vision or natural language processing models, use the Pytorch workbench image. Minimal Python If you do not require advanced machine learning features, or additional resources for compute-intensive data science work, you can use the Minimal Python image to develop your models. TrustyAI Use the TrustyAI workbench image to leverage your data science work with model explainability, tracing, and accountability, and runtime monitoring. See the TrustyAI Explainability repository for some example Jupyter notebooks. code-server With the code-server workbench image, you can customize your workbench environment to meet your needs using a variety of extensions to add new languages, themes, debuggers, and connect to additional services. Enhance the efficiency of your data science work with syntax highlighting, auto-indentation, and bracket matching, as well as an automatic task runner for seamless automation. For more information, see code-server in GitHub . NOTE: Elyra-based pipelines are not available with the code-server workbench image. RStudio Server (Technology preview) Use the RStudio Server workbench image to access the RStudio IDE, an integrated development environment for R, a programming language for statistical computing and graphics. For more information, see the RStudio Server site . 
To use the RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images . Important Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through https://rstudio.org/ and is subject to RStudio licensing terms. Review the licensing terms before you use this sample workbench. CUDA - RStudio Server (Technology Preview) Use the CUDA - RStudio Server workbench image to access the RStudio IDE and NVIDIA CUDA Toolkit. RStudio is an integrated development environment for R, a programming language for statistical computing and graphics. With the NVIDIA CUDA toolkit, you can optimize your work using GPU-accelerated libraries and optimization tools. For more information, see the RStudio Server site . To use the CUDA - RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the cuda-rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images . Important Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through https://rstudio.org/ and is subject to RStudio licensing terms. Review the licensing terms before you use this sample workbench. The CUDA - RStudio Server workbench image contains NVIDIA CUDA technology. CUDA licensing information is available at https://docs.nvidia.com/cuda/ . Review the licensing terms before you use this sample workbench. ROCm Use the ROCm notebook image to run AI and machine learning workloads on AMD GPUs in OpenShift AI. It includes ROCm libraries and tools optimized for high-performance GPU acceleration, supporting custom AI workflows and data processing tasks. Use this image when integrating additional frameworks or dependencies tailored to your specific AI development needs. ROCm-PyTorch Use the ROCm-PyTorch notebook image to optimize PyTorch workloads on AMD GPUs in OpenShift AI. It includes ROCm-accelerated PyTorch libraries, enabling efficient deep learning training, inference, and experimentation. This image is designed for data scientists working with PyTorch-based workflows, offering integration with GPU scheduling. ROCm-TensorFlow Use the ROCm-TensorFlow notebook image to optimize TensorFlow workloads on AMD GPUs in OpenShift AI. It includes ROCm-accelerated TensorFlow libraries to support high-performance deep learning model training and inference. This image simplifies TensorFlow development on AMD GPUs and integrates with OpenShift AI for resource scaling and management. 2.1.2. Creating a workbench When you create a workbench, you specify an image (an IDE, packages, and other dependencies). You can also configure connections and cluster storage, and add container storage. Prerequisites You have logged in to Red Hat OpenShift AI. If you use OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You created a project.
If you created a Simple Storage Service (S3) account outside of Red Hat OpenShift AI and you want to create connections to your existing S3 storage buckets, you have the following credential information for the storage buckets: Endpoint URL Access key Secret key Region Bucket name For more information, see Working with data in an S3-compatible object store . Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that you want to add the workbench to. A project details page opens. Click the Workbenches tab. Click Create workbench . The Create workbench page opens. In the Name field, enter a unique name for your workbench. Optional: If you want to change the default resource name for your workbench, click Edit resource name . The resource name is what your resource is labeled in OpenShift. Valid characters include lowercase letters, numbers, and hyphens (-). The resource name cannot exceed 30 characters, and it must start with a letter and end with a letter or number. Note: You cannot change the resource name after the workbench is created. You can edit only the display name and the description. Optional: In the Description field, enter a description for your workbench. In the Notebook image section, complete the fields to specify the workbench image to use with your workbench. From the Image selection list, select a workbench image that suits your use case. A workbench image includes an IDE and Python packages (reusable code). Optionally, click View package information to view a list of packages that are included in the image that you selected. If the workbench image has multiple versions available, select the workbench image version to use from the Version selection list. To use the latest package versions, Red Hat recommends that you use the most recently added image. Note You can change the workbench image after you create the workbench. In the Deployment size section, from the Container size list, select a container size for your server. The container size specifies the number of CPUs and the amount of memory allocated to the container, setting the guaranteed minimum (request) and maximum (limit) for both. Optional: In the Environment variables section, select and specify values for any environment variables. Setting environment variables during the workbench configuration helps you save time later because you do not need to define them in the body of your notebooks, or with the IDE command line interface. If you are using S3-compatible storage, add these recommended environment variables: AWS_ACCESS_KEY_ID specifies your Access Key ID for Amazon Web Services. AWS_SECRET_ACCESS_KEY specifies your Secret access key for the account specified in AWS_ACCESS_KEY_ID . OpenShift AI stores the credentials as Kubernetes secrets in a protected namespace if you select Secret when you add the variable. In the Cluster storage section, configure the storage for your workbench. Select one of the following options: Create new persistent storage to create storage that is retained after you shut down your workbench. Complete the relevant fields to define the storage: Enter a name for the cluster storage. Enter a description for the cluster storage. Select a storage class for the cluster storage. Note You cannot change the storage class after you add the cluster storage to the workbench. Under Persistent storage size , enter a new size in gibibytes or mebibytes. 
Use existing persistent storage to reuse existing storage and select the storage from the Persistent storage list. Optional: You can add a connection to your workbench. A connection is a resource that contains the configuration parameters needed to connect to a data source or sink, such as an object storage bucket. You can use storage buckets for storing data, models, and pipeline artifacts. You can also use a connection to specify the location of a model that you want to deploy. In the Connections section, use an existing connection or create a new connection: Use an existing connection as follows: Click Attach existing connections . From the Connection list, select a connection that you previously defined. Create a new connection as follows: Click Create connection . The Add connection dialog appears. From the Connection type drop-down list, select the type of connection. The Connection details section appears. If you selected S3 compatible object storage in the preceding step, configure the connection details: In the Connection name field, enter a unique name for the connection. Optional: In the Description field, enter a description for the connection. In the Access key field, enter the access key ID for the S3-compatible object storage provider. In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified. In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket. In the Region field, enter the default region of your S3-compatible object storage account. In the Bucket field, enter the name of your S3-compatible object storage bucket. Click Create . If you selected URI in the preceding step, configure the connection details: In the Connection name field, enter a unique name for the connection. Optional: In the Description field, enter a description for the connection. In the URI field, enter the Uniform Resource Identifier (URI). Click Create . Click Create workbench . Verification The workbench that you created appears on the Workbenches tab for the project. Any cluster storage that you associated with the workbench during the creation process appears on the Cluster storage tab for the project. The Status column on the Workbenches tab displays a status of Starting when the workbench server is starting, and Running when the workbench has successfully started. Optional: Click the Open link to open the IDE in a new window. 2.2. Starting a workbench You can manually start a data science project's workbench from the Workbenches tab on the project details page. By default, workbenches start immediately after you create them. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project that contains a workbench. Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project whose workbench you want to start. A project details page opens. Click the Workbenches tab. In the Status column for the workbench that you want to start, click Start . The Status column changes from Stopped to Starting when the workbench server is starting, and then to Running when the workbench has successfully started. * Optional: Click the Open link to open the IDE in a new window. 
Verification The workbench that you started appears on the Workbenches tab for the project, with the status of Running . 2.3. Updating a project workbench If your data science work requires you to change your workbench's notebook image, container size, or identifying information, you can update the properties of your project's workbench. If you require extra power for use with large datasets, you can assign accelerators to your workbench to optimize performance. Prerequisites You have logged in to Red Hat OpenShift AI. If you use OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project that has a workbench. Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project whose workbench you want to update. A project details page opens. Click the Workbenches tab. Click the action menu ( ... ) beside the workbench that you want to update and then click Edit workbench . The Edit workbench page opens. Update any of the workbench properties and then click Update workbench . Verification The workbench that you updated appears on the Workbenches tab for the project. 2.4. Deleting a workbench from a data science project You can delete workbenches from your data science projects to help you remove Jupyter notebooks that are no longer relevant to your work. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project with a workbench. Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that you want to delete the workbench from. A project details page opens. Click the Workbenches tab. Click the action menu ( ... ) beside the workbench that you want to delete and then click Delete workbench . The Delete workbench dialog opens. Enter the name of the workbench in the text field to confirm that you intend to delete it. Click Delete workbench . Verification The workbench that you deleted is no longer displayed on the Workbenches tab for the project. The custom resource (CR) associated with the workbench's Jupyter notebook is deleted.
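For readers who prefer to inspect or audit these resources from the command line, the workbenches, connections, and cluster storage that you create through the dashboard are backed by ordinary Kubernetes resources in the project's namespace. The following is a minimal, illustrative sketch of the kind of Secret that an S3-compatible object storage connection produces; the metadata labels, annotations, and data keys shown here are assumptions that can vary between OpenShift AI releases, so verify them against the resources generated in your own cluster rather than treating this as the definitive schema:
apiVersion: v1
kind: Secret
metadata:
  name: my-s3-connection                      # hypothetical connection name
  namespace: my-data-science-project          # hypothetical project namespace
  labels:
    opendatahub.io/dashboard: "true"          # assumption: marks the resource as dashboard-managed
  annotations:
    opendatahub.io/connection-type: s3        # assumption: identifies the secret as an S3 connection
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <access_key_id>
  AWS_SECRET_ACCESS_KEY: <secret_access_key>
  AWS_S3_ENDPOINT: https://s3.example.com     # hypothetical endpoint URL
  AWS_DEFAULT_REGION: us-east-1               # hypothetical region
  AWS_S3_BUCKET: my-bucket                    # hypothetical bucket name
When a connection like this is attached to a workbench, its keys are surfaced to the workbench container as environment variables, which is why credentials such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY do not need to be redefined inside your notebooks.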
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_on_data_science_projects/using-project-workbenches_projects
|
Configuring Red Hat OpenStack Platform networking
|
Configuring Red Hat OpenStack Platform networking Red Hat OpenStack Platform 17.1 Managing the OpenStack Networking service (neutron) OpenStack Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_red_hat_openstack_platform_networking/index
|
Chapter 1. Architectures
|
Chapter 1. Architectures Red Hat Enterprise Linux 7 is available as a single kit on the following architectures [1] : 64-bit AMD 64-bit Intel IBM POWER7 IBM System z [2] In this release, Red Hat brings together improvements for servers and systems, as well as for the overall Red Hat open source experience. [1] Note that the Red Hat Enterprise Linux 7 installation is only supported on 64-bit hardware. Red Hat Enterprise Linux 7 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Note that Red Hat Enterprise Linux 7 supports IBM zEnterprise 196 hardware or later; IBM System z10 mainframe systems are no longer supported and will not boot Red Hat Enterprise Linux 7.
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-architectures
|
Chapter 6. Ingress Operator in OpenShift Container Platform
|
Chapter 6. Ingress Operator in OpenShift Container Platform 6.1. OpenShift Container Platform Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints. 6.2. The Ingress configuration asset The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml . YAML Definition of the Ingress resource apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com The installation program stores this asset in the cluster-ingress-02-config.yml file in the manifests/ directory. This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows: The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller. The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route resource that does not specify an explicit host. 6.3. Ingress Controller configuration parameters The ingresscontrollers.operator.openshift.io resource offers the following configuration parameters. Parameter Description domain domain is a DNS name serviced by the Ingress Controller and is used to configure multiple features: For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy . When using a generated default certificate, the certificate is valid for domain and its subdomains . See defaultCertificate . The value is published to individual Route statuses so that users know where to target external DNS records. The domain value must be unique among all Ingress Controllers and cannot be updated. If empty, the default value is ingress.config.openshift.io/cluster .spec.domain . replicas replicas is the desired number of Ingress Controller replicas. If not set, the default value is 2 . endpointPublishingStrategy endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform : AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) Bare metal: NodePortService Other: HostNetwork Note On Red Hat OpenStack Platform (RHOSP), the LoadBalancerService endpoint publishing strategy is only supported if a cloud provider is configured to create health monitors. 
For RHOSP 16.1 and 16.2, this strategy is only possible if you use the Amphora Octavia provider. For more information, see the "Setting cloud provider options" section of the RHOSP installation documentation. For most platforms, the endpointPublishingStrategy value can be updated. On GCP, you can configure the following endpointPublishingStrategy fields: loadBalancer.scope loadBalancer.providerParameters.gcp.clientAccess hostNetwork.protocol nodePort.protocol defaultCertificate The defaultCertificate value is a reference to a secret that contains the default certificate that is served by the Ingress Controller. When Routes do not specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: * tls.crt : certificate file contents * tls.key : key file contents If not set, a wildcard certificate is automatically generated and used. The certificate is valid for the Ingress Controller domain and subdomains , and the generated certificate's CA is automatically integrated with the cluster's trust store. The in-use certificate, whether generated or user-specified, is automatically integrated with the OpenShift Container Platform built-in OAuth server. namespaceSelector namespaceSelector is used to filter the set of namespaces serviced by the Ingress Controller. This is useful for implementing shards. routeSelector routeSelector is used to filter the set of Routes serviced by the Ingress Controller. This is useful for implementing shards. nodePlacement nodePlacement enables explicit control over the scheduling of the Ingress Controller. If not set, the default values are used. Note The nodePlacement parameter includes two parts, nodeSelector and tolerations . For example: nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists tlsSecurityProfile tlsSecurityProfile specifies settings for TLS connections for Ingress Controllers. If not set, the default value is based on the apiservers.config.openshift.io/cluster resource. When using the Old , Intermediate , and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z , an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the Ingress Controller, resulting in a rollout. The minimum TLS version for Ingress Controllers is 1.1 , and the maximum TLS version is 1.3 . Note Ciphers and the minimum TLS version of the configured security profile are reflected in the TLSProfile status. Important The Ingress Operator converts the TLS 1.0 of an Old or Custom profile to 1.1 . clientTLS clientTLS authenticates client access to the cluster and services; as a result, mutual TLS authentication is enabled. If not set, then client TLS is not enabled. clientTLS has the required subfields, spec.clientTLS.clientCertificatePolicy and spec.clientTLS.ClientCA . The ClientCertificatePolicy subfield accepts one of the two values: Required or Optional . The ClientCA subfield specifies a config map that is in the openshift-config namespace. The config map should contain a CA certificate bundle. The AllowedSubjectPatterns subfield is an optional value that specifies a list of regular expressions, which are matched against the distinguished name on a valid client certificate to filter requests. The regular expressions must use PCRE syntax.
At least one pattern must match a client certificate's distinguished name; otherwise, the Ingress Controller rejects the certificate and denies the connection. If not specified, the Ingress Controller does not reject certificates based on the distinguished name. routeAdmission routeAdmission defines a policy for handling new route claims, such as allowing or denying claims across namespaces. namespaceOwnership describes how hostname claims across namespaces should be handled. The default is Strict . Strict : does not allow routes to claim the same hostname across namespaces. InterNamespaceAllowed : allows routes to claim different paths of the same hostname across namespaces. wildcardPolicy describes how routes with wildcard policies are handled by the Ingress Controller. WildcardsAllowed : Indicates routes with any wildcard policy are admitted by the Ingress Controller. WildcardsDisallowed : Indicates only routes with a wildcard policy of None are admitted by the Ingress Controller. Updating wildcardPolicy from WildcardsAllowed to WildcardsDisallowed causes admitted routes with a wildcard policy of Subdomain to stop working. These routes must be recreated to a wildcard policy of None to be readmitted by the Ingress Controller. WildcardsDisallowed is the default setting. IngressControllerLogging logging defines parameters for what is logged where. If this field is empty, operational logs are enabled but access logs are disabled. access describes how client requests are logged. If this field is empty, access logging is disabled. destination describes a destination for log messages. type is the type of destination for logs: Container specifies that logs should go to a sidecar container. The Ingress Operator configures the container, named logs , on the Ingress Controller pod and configures the Ingress Controller to write logs to the container. The expectation is that the administrator configures a custom logging solution that reads logs from this container. Using container logs means that logs may be dropped if the rate of logs exceeds the container runtime capacity or the custom logging solution capacity. Syslog specifies that logs are sent to a Syslog endpoint. The administrator must specify an endpoint that can receive Syslog messages. The expectation is that the administrator has configured a custom Syslog instance. container describes parameters for the Container logging destination type. Currently there are no parameters for container logging, so this field must be empty. syslog describes parameters for the Syslog logging destination type: address is the IP address of the syslog endpoint that receives log messages. port is the UDP port number of the syslog endpoint that receives log messages. maxLength is the maximum length of the syslog message. It must be between 480 and 4096 bytes. If this field is empty, the maximum length is set to the default value of 1024 bytes. facility specifies the syslog facility of log messages. If this field is empty, the facility is local1 . Otherwise, it must specify a valid syslog facility: kern , user , mail , daemon , auth , syslog , lpr , news , uucp , cron , auth2 , ftp , ntp , audit , alert , cron2 , local0 , local1 , local2 , local3 , local4 , local5 , local6 , or local7 . httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation's default HTTP log format. For HAProxy's default HTTP log format, see the HAProxy documentation .
httpHeaders httpHeaders defines the policy for HTTP headers. By setting the forwardedHeaderPolicy for the IngressControllerHTTPHeaders , you specify when and how the Ingress Controller sets the Forwarded , X-Forwarded-For , X-Forwarded-Host , X-Forwarded-Port , X-Forwarded-Proto , and X-Forwarded-Proto-Version HTTP headers. By default, the policy is set to Append . Append specifies that the Ingress Controller appends the headers, preserving any existing headers. Replace specifies that the Ingress Controller sets the headers, removing any existing headers. IfNone specifies that the Ingress Controller sets the headers if they are not already set. Never specifies that the Ingress Controller never sets the headers, preserving any existing headers. By setting headerNameCaseAdjustments , you can specify case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying X-Forwarded-For indicates that the x-forwarded-for HTTP header should be adjusted to have the specified capitalization. These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted. httpCompression httpCompression defines the policy for HTTP traffic compression. mimeTypes defines a list of MIME types to which compression should be applied. For example, text/css; charset=utf-8 , text/html , text/* , image/svg+xml , application/octet-stream , X-custom/customsub , using the format pattern, type/subtype; [;attribute=value] . The types are: application, image, message, multipart, text, video, or a custom type prefaced by X- . To see the full notation for MIME types and subtypes, see RFC1341 . httpErrorCodePages httpErrorCodePages specifies custom HTTP error code response pages. By default, an IngressController uses error pages built into the IngressController image. httpCaptureCookies httpCaptureCookies specifies HTTP cookies that you want to capture in access logs. If the httpCaptureCookies field is empty, the access logs do not capture the cookies. For any cookie that you want to capture, the following parameters must be in your IngressController configuration: name specifies the name of the cookie. maxLength specifies the maximum length of the cookie. matchType specifies if the field name of the cookie exactly matches the capture cookie setting or is a prefix of the capture cookie setting. The matchType field uses the Exact and Prefix parameters. For example: httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE httpCaptureHeaders httpCaptureHeaders specifies the HTTP headers that you want to capture in the access logs. If the httpCaptureHeaders field is empty, the access logs do not capture the headers. httpCaptureHeaders contains two lists of headers to capture in the access logs. The two lists of header fields are request and response . In both lists, the name field must specify the header name and the maxLength field must specify the maximum length of the header.
For example: httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length tuningOptions tuningOptions specifies options for tuning the performance of Ingress Controller pods. headerBufferBytes specifies how much memory is reserved, in bytes, for Ingress Controller connection sessions. This value must be at least 16384 if HTTP/2 is enabled for the Ingress Controller. If not set, the default value is 32768 bytes. Setting this field is not recommended because headerBufferBytes values that are too small can break the Ingress Controller, and headerBufferBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary. headerBufferMaxRewriteBytes specifies how much memory should be reserved, in bytes, from headerBufferBytes for HTTP header rewriting and appending for Ingress Controller connection sessions. The minimum value for headerBufferMaxRewriteBytes is 4096 . headerBufferBytes must be greater than headerBufferMaxRewriteBytes for incoming HTTP requests. If not set, the default value is 8192 bytes. Setting this field is not recommended because headerBufferMaxRewriteBytes values that are too small can break the Ingress Controller and headerBufferMaxRewriteBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary. threadCount specifies the number of threads to create per HAProxy process. Creating more threads allows each Ingress Controller pod to handle more connections, at the cost of more system resources being used. HAProxy supports up to 64 threads. If this field is empty, the Ingress Controller uses the default value of 4 threads. The default value can change in future releases. Setting this field is not recommended because increasing the number of HAProxy threads allows Ingress Controller pods to use more CPU time under load, and can prevent other pods from receiving the CPU resources they need to perform. Reducing the number of threads can cause the Ingress Controller to perform poorly. clientTimeout specifies how long a connection is held open while waiting for a client response. If unset, the default timeout is 30s . serverFinTimeout specifies how long a connection is held open while waiting for the server response to the client that is closing the connection. If unset, the default timeout is 1s . serverTimeout specifies how long a connection is held open while waiting for a server response. If unset, the default timeout is 30s . clientFinTimeout specifies how long a connection is held open while waiting for the client response to the server closing the connection. If unset, the default timeout is 1s . tlsInspectDelay specifies how long the router can hold data to find a matching route. Setting this value too short can cause the router to fall back to the default certificate for edge-terminated, reencrypted, or passthrough routes, even when using a better matched certificate. If unset, the default inspect delay is 5s . tunnelTimeout specifies how long a tunnel connection, including websockets, remains open while the tunnel is idle. If unset, the default timeout is 1h . logEmptyRequests logEmptyRequests specifies connections for which no request is received and logged. These empty requests come from load balancer health probes or web browser speculative connections (preconnect) and logging these requests can be undesirable.
However, these requests can be caused by network errors, in which case logging empty requests can be useful for diagnosing the errors. These requests can be caused by port scans, and logging empty requests can aid in detecting intrusion attempts. Allowed values for this field are Log and Ignore . The default value is Log . The LoggingPolicy type accepts either one of two values: Log : Setting this value to Log indicates that an event should be logged. Ignore : Setting this value to Ignore sets the dontlognull option in the HAProxy configuration. HTTPEmptyRequestsPolicy HTTPEmptyRequestsPolicy describes how HTTP connections are handled if the connection times out before a request is received. Allowed values for this field are Respond and Ignore . The default value is Respond . The HTTPEmptyRequestsPolicy type accepts either one of two values: Respond : If the field is set to Respond , the Ingress Controller sends an HTTP 400 or 408 response, logs the connection if access logging is enabled, and counts the connection in the appropriate metrics. Ignore : Setting this option to Ignore adds the http-ignore-probes parameter in the HAProxy configuration. If the field is set to Ignore , the Ingress Controller closes the connection without sending a response, logging the connection, or incrementing metrics. These connections come from load balancer health probes or web browser speculative connections (preconnect) and can be safely ignored. However, these requests can be caused by network errors, so setting this field to Ignore can impede detection and diagnosis of problems. These requests can be caused by port scans, in which case logging empty requests can aid in detecting intrusion attempts. Note All parameters are optional. 6.3.1. Ingress Controller TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a connecting client can use when connecting to the server. 6.3.1.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 6.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases.
For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 6.3.1.2. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Note The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile. The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 6.3.1.3. Configuring mutual TLS authentication You can configure the Ingress Controller to enable mutual TLS (mTLS) authentication by setting a spec.clientTLS value. The clientTLS value configures the Ingress Controller to verify client certificates. This configuration includes setting a clientCA value, which is a reference to a config map. The config map contains the PEM-encoded CA certificate bundle that is used to verify a client's certificate. Optionally, you can also configure a list of certificate subject filters. 
If the clientCA value specifies an X509v3 certificate revocation list (CRL) distribution point, the Ingress Operator downloads and manages a CRL config map based on the HTTP URI X509v3 CRL Distribution Point specified in each provided certificate. The Ingress Controller uses this config map during mTLS/TLS negotiation. Requests that do not provide valid certificates are rejected. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a PEM-encoded CA certificate bundle. If your CA bundle references a CRL distribution point, you must have also included the end-entity or leaf certificate in the client CA bundle. This certificate must have included an HTTP URI under CRL Distribution Points , as described in RFC 5280. For example: Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl Procedure In the openshift-config namespace, create a config map from your CA bundle: USD oc create configmap \ router-ca-certs-default \ --from-file=ca-bundle.pem=client-ca.crt \ 1 -n openshift-config 1 The config map data key must be ca-bundle.pem , and the data value must be a CA certificate in PEM format. Edit the IngressController resource in the openshift-ingress-operator project: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.clientTLS field and subfields to configure mutual TLS: Sample IngressController CR for a clientTLS profile that specifies filtering patterns apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - "^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD" 6.4. View the default Ingress Controller The Ingress Operator is a core feature of OpenShift Container Platform and is enabled out of the box. Every new OpenShift Container Platform installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute. Procedure View the default Ingress Controller: USD oc describe --namespace=openshift-ingress-operator ingresscontroller/default 6.5. View Ingress Operator status You can view and inspect the status of your Ingress Operator. Procedure View your Ingress Operator status: USD oc describe clusteroperators/ingress 6.6. View Ingress Controller logs You can view your Ingress Controller logs. Procedure View your Ingress Controller logs: USD oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name> 6.7. View Ingress Controller status You can view the status of a particular Ingress Controller. Procedure View the status of an Ingress Controller: USD oc describe --namespace=openshift-ingress-operator ingresscontroller/<name> 6.8. Configuring the Ingress Controller 6.8.1. Setting a custom default certificate As an administrator, you can configure an Ingress Controller to use a custom certificate by creating a Secret resource and editing the IngressController custom resource (CR). Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI.
Your certificate meets the following requirements: The certificate is valid for the ingress domain. The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com . You must have an IngressController CR. You may use the default one: USD oc --namespace openshift-ingress-operator get ingresscontrollers Example output NAME AGE default 10m Note If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s). Procedure The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory. Substitute the actual path names for tls.crt and tls.key . You also may substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController CR. Note This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy. Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files. USD oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key Update the IngressController CR to reference the new certificate secret: USD oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \ --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}' Verify the update was effective: USD echo Q |\ openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null |\ openssl x509 -noout -subject -issuer -enddate where: <domain> Specifies the base domain name for your cluster. Example output subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM Tip You can alternatively apply the following YAML to set a custom default certificate: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default The certificate secret name should match the value used to update the CR. Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller's deployment to use the custom certificate. 6.8.2. Removing a custom default certificate As an administrator, you can remove a custom certificate that you configured an Ingress Controller to use. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You previously configured a custom default certificate for the Ingress Controller. Procedure To remove the custom certificate and restore the certificate that ships with OpenShift Container Platform, enter the following command: USD oc patch -n openshift-ingress-operator ingresscontrollers/default \ --type json -p USD'- op: remove\n path: /spec/defaultCertificate' There can be a delay while the cluster reconciles the new certificate configuration. 
Verification To confirm that the original cluster certificate is restored, enter the following command: USD echo Q | \ openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | \ openssl x509 -noout -subject -issuer -enddate where: <domain> Specifies the base domain name for your cluster. Example output subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT 6.8.3. Scaling an Ingress Controller Manually scale an Ingress Controller to meet routing performance or availability requirements, such as the requirement to increase throughput. oc commands are used to scale the IngressController resource. The following procedure provides an example for scaling up the default IngressController . Note Scaling is not an immediate action, as it takes time to create the desired number of replicas. Procedure View the current number of available replicas for the default IngressController : USD oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}' Example output 2 Scale the default IngressController to the desired number of replicas using the oc patch command. The following example scales the default IngressController to 3 replicas: USD oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge Example output ingresscontroller.operator.openshift.io/default patched Verify that the default IngressController scaled to the number of replicas that you specified: USD oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}' Example output 3 Tip You can alternatively apply the following YAML to scale an Ingress Controller to three replicas: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1 1 If you need a different number of replicas, change the replicas value. 6.8.4. Configuring Ingress access logging You can configure the Ingress Controller to enable access logs. If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of OpenShift Container Platform, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs. Container logging is useful to enable access logs on low-traffic clusters when there is no existing Syslog logging infrastructure, or for short-term use while diagnosing problems with the Ingress Controller. Syslog is needed for high-traffic clusters where access logs could exceed the OpenShift Logging stack's capacity, or for environments where any logging solution needs to integrate with an existing Syslog logging infrastructure. The Syslog use-cases can overlap. Prerequisites Log in as a user with cluster-admin privileges. Procedure Configure Ingress access logging to a sidecar. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a sidecar container, you must specify Container for spec.logging.access.destination.type .
The following example is an Ingress Controller definition that logs to a Container destination: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs inside the Ingress Controller Pod: USD oc -n openshift-ingress logs deployment.apps/router-default -c logs Example output 2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1" Configure Ingress access logging to a Syslog endpoint. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a Syslog endpoint destination, you must specify Syslog for spec.logging.access.destination.type . If the destination type is Syslog , you must also specify a destination endpoint using spec.logging.access.destination.syslog.endpoint and you can specify a facility using spec.logging.access.destination.syslog.facility . The following example is an Ingress Controller definition that logs to a Syslog destination: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 Note The syslog destination port must be UDP. Configure Ingress access logging with a specific log format. You can specify spec.logging.access.httpLogFormat to customize the log format. The following example is an Ingress Controller definition that logs to a syslog endpoint with IP address 1.2.3.4 and port 10514: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV' Disable Ingress access logging. To disable Ingress access logging, leave spec.logging or spec.logging.access empty: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null 6.8.5. Setting Ingress Controller thread count A cluster administrator can set the thread count to increase the amount of incoming connections a cluster can handle. You can patch an existing Ingress Controller to increase the amount of threads. Prerequisites The following assumes that you already created an Ingress Controller. Procedure Update the Ingress Controller to increase the number of threads: USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"threadCount": 8}}}' Note If you have a node that is capable of running large amounts of resources, you can configure spec.nodePlacement.nodeSelector with labels that match the capacity of the intended node, and configure spec.tuningOptions.threadCount to an appropriately high value. 6.8.6. Ingress Controller sharding As the primary mechanism for traffic to enter the cluster, the demands on the Ingress Controller, or router, can be significant. 
As a cluster administrator, you can shard the routes to: Balance Ingress Controllers, or routers, with several routes to speed up responses to changes. Allocate certain routes to have different reliability guarantees than other routes. Allow certain Ingress Controllers to have different policies defined. Allow only specific routes to use additional features. Expose different routes on different addresses so that internal and external users can see different routes, for example. Ingress Controller can use either route labels or namespace labels as a sharding method. 6.8.6.1. Configuring Ingress Controller sharding by using route labels Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Procedure Edit the router-internal.yaml file: # cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: "" selfLink: "" 1 Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain. Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that have the label type: sharded . Create a new route using the domain configured in the router-internal.yaml : USD oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net 6.8.6.2. Configuring Ingress Controller sharding by using namespace labels Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Warning If you deploy the Keepalived Ingress VIP, do not deploy a non-default Ingress Controller with value HostNetwork for the endpointPublishingStrategy parameter. Doing so might cause issues. Use value NodePort instead of HostNetwork for endpointPublishingStrategy . Procedure Edit the router-internal.yaml file: # cat router-internal.yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: "" selfLink: "" 1 Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain. 
Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that is selected by the namespace selector that have the label type: sharded . Create a new route using the domain configured in the router-internal.yaml : USD oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net 6.8.7. Configuring an Ingress Controller to use an internal load balancer When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer. Warning If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet. Important If you want to change the scope for an IngressController , you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created. Figure 6.1. Diagram of LoadBalancer The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress LoadBalancerService endpoint publishing strategy: You can load balance externally, using the cloud provider load balancer, or internally, using the OpenShift Ingress Controller Load Balancer. You can use the single IP address of the load balancer and more familiar ports, such as 8080 and 4200 as shown on the cluster depicted in the graphic. Traffic from the external load balancer is directed at the pods, and managed by the load balancer, as depicted in the instance of a down node. See the Kubernetes Services documentation for implementation details. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IngressController custom resource (CR) in a file named <name>-ingress-controller.yaml , such as in the following example: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3 1 Replace <name> with a name for the IngressController object. 2 Specify the domain for the application published by the controller. 3 Specify a value of Internal to use an internal load balancer. Create the Ingress Controller defined in the step by running the following command: USD oc create -f <name>-ingress-controller.yaml 1 1 Replace <name> with the name of the IngressController object. Optional: Confirm that the Ingress Controller was created by running the following command: USD oc --all-namespaces=true get ingresscontrollers 6.8.8. Configuring global access for an Ingress Controller on GCP An Ingress Controller created on GCP with an internal load balancer generates an internal IP address for the service. A cluster administrator can specify the global access option, which enables clients in any region within the same VPC network and compute region as the load balancer, to reach the workloads running on your cluster. For more information, see the GCP documentation for global access . Prerequisites You deployed an OpenShift Container Platform cluster on GCP infrastructure. You configured an Ingress Controller to use an internal load balancer. You installed the OpenShift CLI ( oc ). Procedure Configure the Ingress Controller resource to allow global access. 
Note You can also create an Ingress Controller and specify the global access option. Configure the Ingress Controller resource: USD oc -n openshift-ingress-operator edit ingresscontroller/default Edit the YAML file: Sample clientAccess configuration to Global spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: LoadBalancerService 1 Set gcp.clientAccess to Global . Save the file to apply the changes. Run the following command to verify that the service allows global access: USD oc -n openshift-ingress edit svc/router-default -o yaml The output shows that global access is enabled for GCP with the annotation, networking.gke.io/internal-load-balancer-allow-global-access . 6.8.9. Configuring the default Ingress Controller for your cluster to be internal You can configure the default Ingress Controller for your cluster to be internal by deleting and recreating it. Warning If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet. Important If you want to change the scope for an IngressController , you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Configure the default Ingress Controller for your cluster to be internal by deleting and recreating it. USD oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF 6.8.10. Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... Tip You can alternatively apply the following YAML to configure the route admission policy: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed 6.8.11. Using wildcard routes The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy to configure the ROUTER_ALLOW_WILDCARD_ROUTES environment variable of the Ingress Controller. The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None , which is backwards compatible with existing IngressController resources. Procedure Configure the wildcard policy. 
Use the following command to edit the IngressController resource: USD oc edit IngressController Under spec , set the wildcardPolicy field to WildcardsDisallowed or WildcardsAllowed : spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed 6.8.12. Using X-Forwarded headers You configure the HAProxy Ingress Controller to specify a policy for how to handle HTTP headers including Forwarded and X-Forwarded-For . The Ingress Operator uses the HTTPHeaders field to configure the ROUTER_SET_FORWARDED_HEADERS environment variable of the Ingress Controller. Procedure Configure the HTTPHeaders field for the Ingress Controller. Use the following command to edit the IngressController resource: USD oc edit IngressController Under spec , set the HTTPHeaders policy field to Append , Replace , IfNone , or Never : apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append Example use cases As a cluster administrator, you can: Configure an external proxy that injects the X-Forwarded-For header into each request before forwarding it to an Ingress Controller. To configure the Ingress Controller to pass the header through unmodified, you specify the never policy. The Ingress Controller then never sets the headers, and applications receive only the headers that the external proxy provides. Configure the Ingress Controller to pass the X-Forwarded-For header that your external proxy sets on external cluster requests through unmodified. To configure the Ingress Controller to set the X-Forwarded-For header on internal cluster requests, which do not go through the external proxy, specify the if-none policy. If an HTTP request already has the header set through the external proxy, then the Ingress Controller preserves it. If the header is absent because the request did not come through the proxy, then the Ingress Controller adds the header. As an application developer, you can: Configure an application-specific external proxy that injects the X-Forwarded-For header. To configure an Ingress Controller to pass the header through unmodified for an application's Route, without affecting the policy for other Routes, add an annotation haproxy.router.openshift.io/set-forwarded-headers: if-none or haproxy.router.openshift.io/set-forwarded-headers: never on the Route for the application. Note You can set the haproxy.router.openshift.io/set-forwarded-headers annotation on a per route basis, independent from the globally set value for the Ingress Controller. 6.8.13. Enabling HTTP/2 Ingress connectivity You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more. You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster. To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate. The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. 
This restriction is because HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes. Warning Using WebSockets with a re-encrypt route and with HTTP/2 enabled on an Ingress Controller requires WebSocket support over HTTP/2. WebSockets over HTTP/2 is a feature of HAProxy 2.4, which is unsupported in OpenShift Container Platform at this time. Important For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol. Procedure Enable HTTP/2 on a single Ingress Controller. To enable HTTP/2 on an Ingress Controller, enter the oc annotate command: USD oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true Replace <ingresscontroller_name> with the name of the Ingress Controller to annotate. Enable HTTP/2 on the entire cluster. To enable HTTP/2 for the entire cluster, enter the oc annotate command: USD oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true Tip You can alternatively apply the following YAML to add the annotation: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: "true" 6.8.14. Configuring the PROXY protocol for an Ingress Controller A cluster administrator can configure the PROXY protocol when an Ingress Controller uses either the HostNetwork or NodePortService endpoint publishing strategy types. The PROXY protocol enables the load balancer to preserve the original client addresses for connections that the Ingress Controller receives. The original client addresses are useful for logging, filtering, and injecting HTTP headers. In the default configuration, the connections that the Ingress Controller receives only contain the source address that is associated with the load balancer. This feature is not supported in cloud deployments. This restriction is because when OpenShift Container Platform runs in a cloud platform, and an IngressController specifies that a service load balancer should be used, the Ingress Operator configures the load balancer service and enables the PROXY protocol based on the platform requirement for preserving source addresses. Important You must configure both OpenShift Container Platform and the external load balancer to either use the PROXY protocol or to use TCP. Warning The PROXY protocol is unsupported for the default Ingress Controller with installer-provisioned clusters on non-cloud platforms that use a Keepalived Ingress VIP. 
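Before you edit the configuration, it can be useful to confirm which endpoint publishing strategy the Ingress Controller currently uses, because the PROXY protocol settings in this procedure apply only to the HostNetwork and NodePortService types. The following check is a minimal sketch that assumes the default Ingress Controller and that the strategy is reported in the status of the resource:
$ oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.status.endpointPublishingStrategy.type}'
The command prints a value such as HostNetwork , NodePortService , or LoadBalancerService .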
Prerequisites You created an Ingress Controller. Procedure Edit the Ingress Controller resource: USD oc -n openshift-ingress-operator edit ingresscontroller/default Set the PROXY configuration: If your Ingress Controller uses the hostNetwork endpoint publishing strategy type, set the spec.endpointPublishingStrategy.hostNetwork.protocol subfield to PROXY : Sample hostNetwork configuration to PROXY spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork If your Ingress Controller uses the NodePortService endpoint publishing strategy type, set the spec.endpointPublishingStrategy.nodePort.protocol subfield to PROXY : Sample nodePort configuration to PROXY spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService 6.8.15. Specifying an alternative cluster domain using the appsDomain option As a cluster administrator, you can specify an alternative to the default cluster domain for user-created routes by configuring the appsDomain field. The appsDomain field is an optional domain for OpenShift Container Platform to use instead of the default, which is specified in the domain field. If you specify an alternative domain, it overrides the default cluster domain for the purpose of determining the default host for a new route. For example, you can use the DNS domain for your company as the default domain for routes and ingresses for applications running on your cluster. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc command line interface. Procedure Configure the appsDomain field by specifying an alternative default domain for user-created routes. Edit the ingress cluster resource: USD oc edit ingresses.config/cluster -o yaml Edit the YAML file: Sample appsDomain configuration to test.example.com apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2 1 Specifies the default domain. You cannot modify the default domain after installation. 2 Optional: Domain for OpenShift Container Platform infrastructure to use for application routes. Instead of the default prefix, apps , you can use an alternative prefix like test . Verify that an existing route contains the domain name specified in the appsDomain field by exposing the route and verifying the route domain change: Note Wait for the openshift-apiserver to finish rolling updates before exposing the route. Expose the route: USD oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed Example output: USD oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None 6.8.16. Converting HTTP header case HAProxy 2.2 lowercases HTTP header names by default, for example, changing Host: xyz.com to host: xyz.com . If legacy applications are sensitive to the capitalization of HTTP header names, use the Ingress Controller spec.httpHeaders.headerNameCaseAdjustments API field to accommodate legacy applications until they can be fixed. Important Because OpenShift Container Platform 4.10 includes HAProxy 2.2, make sure to add the necessary configuration by using spec.httpHeaders.headerNameCaseAdjustments before upgrading. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role.
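Note Before you make changes, you can review which header name case adjustments, if any, are already configured. This is a hedged example that assumes the default Ingress Controller:
$ oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.spec.httpHeaders.headerNameCaseAdjustments}'
An empty result means that no case adjustments are currently set.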
Procedure As a cluster administrator, you can convert the HTTP header case by entering the oc patch command or by setting the HeaderNameCaseAdjustments field in the Ingress Controller YAML file. Specify an HTTP header to be capitalized by entering the oc patch command. Enter the oc patch command to change the HTTP host header to Host : USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"httpHeaders":{"headerNameCaseAdjustments":["Host"]}}}' Annotate the route of the application: USD oc annotate routes/my-application haproxy.router.openshift.io/h1-adjust-case=true The Ingress Controller then adjusts the host request header as specified. Specify adjustments using the HeaderNameCaseAdjustments field by configuring the Ingress Controller YAML file. The following example Ingress Controller YAML adjusts the host header to Host for HTTP/1 requests to appropriately annotated routes: Example Ingress Controller YAML apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host The following example route enables HTTP response header name case adjustments using the haproxy.router.openshift.io/h1-adjust-case annotation: Example route YAML apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application 1 Set haproxy.router.openshift.io/h1-adjust-case to true. 6.8.17. Using router compression You configure the HAProxy Ingress Controller to specify router compression globally for specific MIME types. You can use the mimeTypes variable to define the formats of MIME types to which compression is applied. The types are: application, image, message, multipart, text, video, or a custom type prefaced by "X-". To see the full notation for MIME types and subtypes, see RFC1341 . Note Memory allocated for compression can affect the maximum number of connections. Additionally, compressing large buffers can introduce latency, as can heavy use of regular expressions or long lists of regular expressions. Not all MIME types benefit from compression, but HAProxy still uses resources to try to compress if instructed to. Generally, text formats, such as html, css, and js, benefit from compression, but formats that are already compressed, such as image, audio, and video, benefit little in exchange for the time and resources spent on compression. Procedure Configure the httpCompression field for the Ingress Controller. Use the following command to edit the IngressController resource: USD oc edit -n openshift-ingress-operator ingresscontrollers/default Under spec , set the httpCompression policy field to mimeTypes and specify a list of MIME types that should have compression applied: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - "text/html" - "text/css; charset=utf-8" - "application/json" ... 6.8.18. Exposing router metrics The HAProxy router metrics are exposed by default in Prometheus format on the default stats port, 1936. External metrics collection and aggregation systems such as Prometheus can access the HAProxy router metrics. You can also view the HAProxy router metrics in a browser in HTML and comma-separated values (CSV) format. Prerequisites You configured your firewall to access the default stats port, 1936.
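Note The following procedure uses both the router pod name and the router IP address. As a convenient, hedged shortcut, you can list the router pods together with their pod IP addresses in a single command:
$ oc -n openshift-ingress get pods -o wide
The IP column in the output corresponds to the <router_IP> value that is used in the curl commands later in this procedure.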
Procedure Get the router pod name by running the following command: USD oc get pods -n openshift-ingress Example output NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h Get the router's username and password, which the router pod stores in the /var/lib/haproxy/conf/metrics-auth/statsUsername and /var/lib/haproxy/conf/metrics-auth/statsPassword files: Get the username by running the following command: USD oc rsh <router_pod_name> cat metrics-auth/statsUsername Get the password by running the following command: USD oc rsh <router_pod_name> cat metrics-auth/statsPassword Get the router IP and metrics certificates by running the following command: USD oc describe pod <router_pod> Get the raw statistics in Prometheus format by running the following command: USD curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics Access the metrics securely by running the following command: USD curl -u user:password https://<router_IP>:<stats_port>/metrics -k Access the default stats port, 1936, by running the following command: USD curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics Example 6.1. Example output ... # HELP haproxy_backend_connections_total Total number of connections. # TYPE haproxy_backend_connections_total gauge haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route"} 0 haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route-alt"} 0 haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route01"} 0 ... # HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value. # TYPE haproxy_exporter_server_threshold gauge haproxy_exporter_server_threshold{type="current"} 11 haproxy_exporter_server_threshold{type="limit"} 500 ... # HELP haproxy_frontend_bytes_in_total Current total of incoming bytes. # TYPE haproxy_frontend_bytes_in_total gauge haproxy_frontend_bytes_in_total{frontend="fe_no_sni"} 0 haproxy_frontend_bytes_in_total{frontend="fe_sni"} 0 haproxy_frontend_bytes_in_total{frontend="public"} 119070 ... # HELP haproxy_server_bytes_in_total Current total of incoming bytes. # TYPE haproxy_server_bytes_in_total gauge haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_no_sni",service=""} 0 haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_sni",service=""} 0 haproxy_server_bytes_in_total{namespace="default",pod="docker-registry-5-nk5fz",route="docker-registry",server="10.130.0.89:5000",service="docker-registry"} 0 haproxy_server_bytes_in_total{namespace="default",pod="hello-rc-vkjqx",route="hello-route",server="10.130.0.90:8080",service="hello-svc-1"} 0 ... Launch the stats window by entering the following URL in a browser: http://<user>:<password>@<router_IP>:<stats_port> Optional: Get the stats in CSV format by entering the following URL in a browser: http://<user>:<password>@<router_ip>:1936/metrics;csv 6.8.19. Customizing HAProxy error code response pages As a cluster administrator, you can specify a custom error code response page for either 503, 404, or both error pages. The HAProxy router serves a 503 error page when the application pod is not running or a 404 error page when the requested URL does not exist. For example, if you customize the 503 error code response page, then the page is served when the application pod is not running, and the default 404 error code HTTP response page is served by the HAProxy router for an incorrect route or a non-existing route. 
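A custom error page is a complete, raw HTTP response that follows the HAProxy error page file format. The following is a minimal, hedged sketch of an error-page-503.http file, not the exact default page that ships with OpenShift Container Platform; the header section must use CRLF line endings:
HTTP/1.0 503 Service Unavailable
Pragma: no-cache
Cache-Control: private, max-age=0, no-cache, no-store
Connection: close
Content-Type: text/html

<html>
  <body>
    <h1>Service Unavailable</h1>
    <p>The application is not available right now. Try again later.</p>
  </body>
</html>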
Custom error code response pages are specified in a config map and then referenced by patching the Ingress Controller. The config map keys have two available file names as follows: error-page-503.http and error-page-404.http . Custom HTTP error code response pages must follow the HAProxy HTTP error page configuration guidelines . Here is an example of the default OpenShift Container Platform HAProxy router HTTP 503 error code response page . You can use the default content as a template for creating your own custom page. By default, the HAProxy router serves only a 503 error page when the application is not running or when the route is incorrect or non-existent. This default behavior is the same as the behavior on OpenShift Container Platform 4.8 and earlier. If you do not provide a config map that customizes an HTTP error code response page, the router serves the default 404 or 503 error code response page. Note If you use the OpenShift Container Platform default 503 error code page as a template for your customizations, the headers in the file require an editor that can use CRLF line endings. Procedure Create a config map named my-custom-error-code-pages in the openshift-config namespace: USD oc -n openshift-config create configmap my-custom-error-code-pages \ --from-file=error-page-503.http \ --from-file=error-page-404.http Important If you do not specify the correct format for the custom error code response page, a router pod outage occurs. To resolve this outage, you must delete or correct the config map and delete the affected router pods so they can be recreated with the correct information. Patch the Ingress Controller to reference the my-custom-error-code-pages config map by name: USD oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"httpErrorCodePages":{"name":"my-custom-error-code-pages"}}}' --type=merge The Ingress Operator copies the my-custom-error-code-pages config map from the openshift-config namespace to the openshift-ingress namespace. The Operator names the config map according to the pattern, <your_ingresscontroller_name>-errorpages , in the openshift-ingress namespace. Display the copy: USD oc get cm default-errorpages -n openshift-ingress Example output 1 The example config map name is default-errorpages because the default Ingress Controller custom resource (CR) was patched. Confirm that the config map containing the custom error response pages is mounted on the router volume, where each config map key is the filename of a custom HTTP error code response: For the 503 custom HTTP error code response: USD oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http For the 404 custom HTTP error code response: USD oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http Verification Verify your custom error code HTTP response: Create a test project and application: USD oc new-project test-ingress USD oc new-app django-psql-example For the 503 custom HTTP error code response: Stop all the pods for the application. Run the following curl command or visit the route hostname in the browser: USD curl -vk <route_hostname> For the 404 custom HTTP error code response: Visit a non-existent route or an incorrect route.
Run the following curl command or visit the route hostname in the browser: USD curl -vk <route_hostname> Check if the errorfile attribute is properly in the haproxy.config file: USD oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile 6.9. Additional resources Configuring a custom PKI
|
[
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com",
"nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists",
"httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE",
"httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl",
"oc create configmap router-ca-certs-default --from-file=ca-bundle.pem=client-ca.crt \\ 1 -n openshift-config",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - \"^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD\"",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/default",
"oc describe clusteroperators/ingress",
"oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>",
"oc --namespace openshift-ingress-operator get ingresscontrollers",
"NAME AGE default 10m",
"oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key",
"oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-certs-default\"}}}'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default",
"oc patch -n openshift-ingress-operator ingresscontrollers/default --type json -p USD'- op: remove\\n path: /spec/defaultCertificate'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"2",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"replicas\": 3}}' --type=merge",
"ingresscontroller.operator.openshift.io/default patched",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container",
"oc -n openshift-ingress logs deployment.apps/router-default -c logs",
"2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 \"GET / HTTP/1.1\"",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"threadCount\": 8}}}'",
"cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc apply -f router-internal.yaml",
"oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net",
"cat router-internal.yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc apply -f router-internal.yaml",
"oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3",
"oc create -f <name>-ingress-controller.yaml 1",
"oc --all-namespaces=true get ingresscontrollers",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: LoadBalancerService",
"oc -n openshift-ingress edit svc/router-default -o yaml",
"oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"oc edit IngressController",
"spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed",
"oc edit IngressController",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append",
"oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true",
"oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"true\"",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork",
"spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService",
"oc edit ingresses.config/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2",
"oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed",
"oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"httpHeaders\":{\"headerNameCaseAdjustments\":[\"Host\"]}}}'",
"oc annotate routes/my-application haproxy.router.openshift.io/h1-adjust-case=true",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application",
"oc edit -n openshift-ingress-operator ingresscontrollers/default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - \"text/html\" - \"text/css; charset=utf-8\" - \"application/json\"",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h",
"oc rsh <router_pod_name> cat metrics-auth/statsUsername",
"oc rsh <router_pod_name> cat metrics-auth/statsPassword",
"oc describe pod <router_pod>",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"curl -u user:password https://<router_IP>:<stats_port>/metrics -k",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"http://<user>:<password>@<router_IP>:<stats_port>",
"http://<user>:<password>@<router_ip>:1936/metrics;csv",
"oc -n openshift-config create configmap my-custom-error-code-pages --from-file=error-page-503.http --from-file=error-page-404.http",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"httpErrorCodePages\":{\"name\":\"my-custom-error-code-pages\"}}}' --type=merge",
"oc get cm default-errorpages -n openshift-ingress",
"NAME DATA AGE default-errorpages 2 25s 1",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http",
"oc new-project test-ingress",
"oc new-app django-psql-example",
"curl -vk <route_hostname>",
"curl -vk <route_hostname>",
"oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/configuring-ingress
|
Managing configurations using Ansible integration
|
Managing configurations using Ansible integration Red Hat Satellite 6.15 Configure Ansible integration in Satellite and use Ansible roles and playbooks to configure your hosts Red Hat Satellite Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_configurations_using_ansible_integration/index
|
2.3. Modifying Control Groups
|
2.3. Modifying Control Groups Each persistent unit supervised by systemd has a unit configuration file in the /usr/lib/systemd/system/ directory. To change parameters of a service unit, modify this configuration file. This can be done either manually or from the command-line interface by using the systemctl set-property command. 2.3.1. Setting Parameters from the Command-Line Interface The systemctl set-property command allows you to persistently change resource control settings during the application runtime. To do so, use the following syntax as root : Replace name with the name of the systemd unit you wish to modify, parameter with a name of the parameter to be changed, and value with a new value you want to assign to this parameter. Not all unit parameters can be changed at runtime, but most of those related to resource control may, see Section 2.3.2, "Modifying Unit Files" for a complete list. Note that systemctl set-property allows you to change multiple properties at once, which is preferable over setting them individually. The changes are applied instantly, and written into the unit file so that they are preserved after reboot. You can change this behavior by passing the --runtime option that makes your settings transient: Example 2.2. Using systemctl set-property To limit the CPU and memory usage of httpd.service from the command line, type: To make this a temporary change, add the --runtime option: 2.3.2. Modifying Unit Files Systemd service unit files provide a number of high-level configuration parameters useful for resource management. These parameters communicate with Linux cgroup controllers, that have to be enabled in the kernel. With these parameters, you can manage CPU, memory consumption, block IO, as well as some more fine-grained unit properties. Managing CPU The cpu controller is enabled by default in the kernel, and consequently every system service receives the same amount of CPU time, regardless of how many processes it contains. This default behavior can be changed with the DefaultControllers parameter in the /etc/systemd/system.conf configuration file. To manage CPU allocation, use the following directive in the [Service] section of the unit configuration file: CPUShares = value Replace value with a number of CPU shares. The default value is 1024. By increasing the number, you assign more CPU time to the unit. Setting the value of the CPUShares parameter automatically turns CPUAccounting on in the unit file. Users can thus monitor the usage of the processor with the systemd-cgtop command. The CPUShares parameter controls the cpu.shares control group parameter. See the description of the cpu controller in Controller-Specific Kernel Documentation to see other CPU-related control parameters. Example 2.3. Limiting CPU Consumption of a Unit To assign the Apache service 1500 CPU shares instead of the default 1024, create a new /etc/systemd/system/httpd.service.d/cpu.conf configuration file with the following content: To apply the changes, reload systemd's configuration and restart Apache so that the modified service file is taken into account: CPUQuota = value Replace value with a value of CPU time quota to assign the specified CPU time quota to the processes executed. The value of the CPUQuota parameter, which is expressed in percentage, specifies how much CPU time the unit gets at maximum, relative to the total CPU time available on one CPU. Values higher than 100% indicate that more than one CPU is used. 
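For example, a hedged drop-in sketch that allows a hypothetical example.service to use up to two full CPUs sets the quota to 200%:
[Service]
CPUQuota=200%
Such a drop-in could be saved as /etc/systemd/system/example.service.d/cpu.conf and activated by reloading the systemd configuration and restarting the service, as in the other examples in this section.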
CPUQuota controls the cpu.max attribute on the unified control group hierarchy, and the legacy cpu.cfs_quota_us attribute. Setting the value of the CPUQuota parameter automatically turns CPUAccounting on in the unit file. Users can thus monitor the usage of the processor with the systemd-cgtop command. Example 2.4. Using CPUQuota Setting CPUQuota to 20% ensures that the executed processes never get more than 20% CPU time on a single CPU. To assign the Apache service a CPU quota of 20%, add the following content to the /etc/systemd/system/httpd.service.d/cpu.conf configuration file: To apply the changes, reload systemd's configuration and restart Apache so that the modified service file is taken into account: Managing Memory To enforce limits on the unit's memory consumption, use the following directives in the [Service] section of the unit configuration file: MemoryLimit = value Replace value with a limit on maximum memory usage of the processes executed in the cgroup. Use suffixes K , M , G , or T to identify Kilobyte, Megabyte, Gigabyte, or Terabyte as the unit of measurement. Also, the MemoryAccounting parameter has to be enabled for the unit. The MemoryLimit parameter controls the memory.limit_in_bytes control group parameter. For more information, see the description of the memory controller in Controller-Specific Kernel Documentation . Example 2.5. Limiting Memory Consumption of a Unit To assign a 1GB memory limit to the Apache service, modify the MemoryLimit setting in the /etc/systemd/system/httpd.service.d/cpu.conf unit file: To apply the changes, reload systemd's configuration and restart Apache so that the modified service file is taken into account: Managing Block IO To manage the Block IO, use the following directives in the [Service] section of the unit configuration file. Directives listed below assume that the BlockIOAccounting parameter is enabled: BlockIOWeight = value Replace value with a new overall block IO weight for the executed processes. Choose a single value between 10 and 1000; the default setting is 1000. BlockIODeviceWeight = device_name value Replace value with a block IO weight for a device specified with device_name . Replace device_name either with a name or with a path to a device. As with BlockIOWeight , it is possible to set a single weight value between 10 and 1000. BlockIOReadBandwidth = device_name value This directive allows you to limit a specific bandwidth for a unit. Replace device_name with the name of a device or with a path to a block device node; value stands for a bandwidth rate. Use suffixes K , M , G , or T to specify units of measurement. A value with no suffix is interpreted as bytes per second. BlockIOWriteBandwidth = device_name value Limits the write bandwidth for a specified device. Accepts the same arguments as BlockIOReadBandwidth . Each of the aforementioned directives controls a corresponding cgroup parameter. For other block IO-related control parameters, see the description of the blkio controller in Controller-Specific Kernel Documentation . Note Currently, the blkio resource controller does not support buffered write operations. It is primarily targeted at direct I/O, so services that use buffered writes will ignore the limits set with BlockIOWriteBandwidth . On the other hand, buffered read operations are supported, and BlockIOReadBandwidth limits are applied correctly to both direct and buffered reads. Example 2.6.
Limiting Block IO of a Unit To lower the block IO weight for the Apache service accessing the /home/jdoe/ directory, add the following text into the /etc/systemd/system/httpd.service.d/cpu.conf unit file: To set the maximum bandwidth for Apache reading from the /var/log/ directory to 5MB per second, use the following syntax: To apply your changes, reload systemd's configuration and restart Apache so that the modified service file is taken into account: Managing Other System Resources There are several other directives that can be used in the unit file to facilitate resource management: DeviceAllow = device_name options This option controls access to specific device nodes. Here, device_name stands for a path to a device node or a device group name as specified in /proc/devices . Replace options with a combination of r , w , and m to allow the unit to read, write, or create device nodes. DevicePolicy = value Here, value is one of: strict (only allows the types of access explicitly specified with DeviceAllow ), closed (allows access to standard pseudo devices including /dev/null, /dev/zero, /dev/full, /dev/random, and /dev/urandom) or auto (allows access to all devices if no explicit DeviceAllow is present, which is the default behavior) Slice = slice_name Replace slice_name with the name of the slice to place the unit in. The default is system.slice . Scope units cannot be arranged in this way, since they are tied to their parent slices. ExecStartPost = command Currently, systemd supports only a subset of cgroup features. However, as a workaround, you can use the ExecStartPost= option along with setting the memory.memsw.limit_in_bytes parameter in order to prevent any swap usage for a service. For more information on ExecStartPost= , see the systemd.service(5) man page. Example 2.7. Configuring Cgroup Options Imagine that you wish to change the memory.memsw.limit_in_bytes setting to the same value as the unit's MemoryLimit = in order to prevent any swap usage for a given example service. To apply the change, reload systemd configuration and restart the service so that the modified setting is taken into account:
|
[
"~]# systemctl set-property name parameter = value",
"~]# systemctl set-property --runtime name property = value",
"~]# systemctl set-property httpd.service CPUShares=600 MemoryLimit=500M",
"~]# systemctl set-property --runtime httpd.service CPUShares=600 MemoryLimit=500M",
"[Service] CPUShares=1500",
"~]# systemctl daemon-reload ~]# systemctl restart httpd.service",
"[Service] CPUQuota=20%",
"~]# systemctl daemon-reload ~]# systemctl restart httpd.service",
"[Service] MemoryLimit=1G",
"~]# systemctl daemon-reload ~]# systemctl restart httpd.service",
"[Service] BlockIODeviceWeight=/home/jdoe 750",
"[Service] BlockIOReadBandwidth=/var/log 5M",
"~]# systemctl daemon-reload ~]# systemctl restart httpd.service",
"ExecStartPost=/bin/bash -c \"echo 1G > /sys/fs/cgroup/memory/system.slice/ example .service/memory.memsw.limit_in_bytes\"",
"~]# systemctl daemon-reload ~]# systemctl restart example .service"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/sec-modifying_control_groups
|
27.2. Displaying the Current PKINIT Configuration
|
27.2. Displaying the Current PKINIT Configuration IdM provides multiple commands you can use to query the PKINIT configuration in your domain. To determine the PKINIT status in your domain, use the ipa pkinit-status command: To determine the PKINIT status on the server where you are logged in, use the ipa-pkinit-manage status command: The commands display the PKINIT configuration status as enabled or disabled : enabled : PKINIT is configured using a certificate signed by the integrated IdM CA or an external PKINIT certificate. See also Section 27.1, "Default PKINIT Status in Different IdM Versions" . disabled : IdM only uses PKINIT for internal purposes on IdM servers. To display the IdM servers with active Kerberos key distribution centers (KDCs) that support PKINIT for IdM clients, use the ipa config-show command on any server: Additional Resources For more details on the command-line tools for reporting the PKINIT status, use the ipa help pkinit command.
|
[
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa-pkinit-manage status PKINIT is enabled The ipa-pkinit-manage command was successful",
"ipa config-show Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers [...output truncated...] IPA masters capable of PKINIT: server1.example.com [...output truncated...]"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/check-pkinit-status
|
Chapter 6. HostFirmwareSettings [metal3.io/v1alpha1]
|
Chapter 6. HostFirmwareSettings [metal3.io/v1alpha1] Description HostFirmwareSettings is the Schema for the hostfirmwaresettings API Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HostFirmwareSettingsSpec defines the desired state of HostFirmwareSettings status object HostFirmwareSettingsStatus defines the observed state of HostFirmwareSettings 6.1.1. .spec Description HostFirmwareSettingsSpec defines the desired state of HostFirmwareSettings Type object Required settings Property Type Description settings integer-or-string Settings are the desired firmware settings stored as name/value pairs. 6.1.2. .status Description HostFirmwareSettingsStatus defines the observed state of HostFirmwareSettings Type object Required settings Property Type Description conditions array Track whether settings stored in the spec are valid based on the schema conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastUpdated string Time that the status was last updated schema object FirmwareSchema is a reference to the Schema used to describe each FirmwareSetting. By default, this will be a Schema in the same Namespace as the settings but it can be overwritten in the Spec settings object (string) Settings are the firmware settings stored as name/value pairs 6.1.3. .status.conditions Description Track whether settings stored in the spec are valid based on the schema Type array 6.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 6.1.5. .status.schema Description FirmwareSchema is a reference to the Schema used to describe each FirmwareSetting. By default, this will be a Schema in the same Namespace as the settings but it can be overwritten in the Spec Type object Required name namespace Property Type Description name string name is the reference to the schema. namespace string namespace is the namespace of the where the schema is stored. 6.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hostfirmwaresettings GET : list objects of kind HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings DELETE : delete collection of HostFirmwareSettings GET : list objects of kind HostFirmwareSettings POST : create HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name} DELETE : delete HostFirmwareSettings GET : read the specified HostFirmwareSettings PATCH : partially update the specified HostFirmwareSettings PUT : replace the specified HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name}/status GET : read status of the specified HostFirmwareSettings PATCH : partially update status of the specified HostFirmwareSettings PUT : replace status of the specified HostFirmwareSettings 6.2.1. /apis/metal3.io/v1alpha1/hostfirmwaresettings Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind HostFirmwareSettings Table 6.2. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettingsList schema 401 - Unauthorized Empty 6.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings Table 6.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of HostFirmwareSettings Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HostFirmwareSettings Table 6.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.8. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettingsList schema 401 - Unauthorized Empty HTTP method POST Description create HostFirmwareSettings Table 6.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.10. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 6.11. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 202 - Accepted HostFirmwareSettings schema 401 - Unauthorized Empty 6.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the HostFirmwareSettings namespace string object name and auth scope, such as for teams and projects Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete HostFirmwareSettings Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. 
Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HostFirmwareSettings Table 6.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.18. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HostFirmwareSettings Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.20. Body parameters Parameter Type Description body Patch schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HostFirmwareSettings Table 6.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.23. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 6.24. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 401 - Unauthorized Empty 6.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name}/status Table 6.25. Global path parameters Parameter Type Description name string name of the HostFirmwareSettings namespace string object name and auth scope, such as for teams and projects Table 6.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified HostFirmwareSettings Table 6.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.28. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HostFirmwareSettings Table 6.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.30. Body parameters Parameter Type Description body Patch schema Table 6.31. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HostFirmwareSettings Table 6.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.33. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 6.34. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 401 - Unauthorized Empty
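The spec and status structures above can be exercised with the oc client, which drives the GET and PATCH endpoints listed in section 6.2.3. The following sketch is illustrative only: the host name ostest-worker-0, the openshift-machine-api namespace, and the ProcTurboMode setting are assumptions rather than values defined by this schema, and the setting names that are actually accepted come from the FirmwareSchema referenced in .status.schema.

    # Read the current firmware settings and the conditions that report whether the spec is valid
    oc get hostfirmwaresettings ostest-worker-0 -n openshift-machine-api -o yaml

    # Merge one desired name/value pair into .spec.settings (hypothetical vendor-specific setting)
    oc patch hostfirmwaresettings ostest-worker-0 -n openshift-machine-api \
      --type merge --patch '{"spec":{"settings":{"ProcTurboMode":"Disabled"}}}'

    # Inspect the status conditions to confirm the change is considered valid against the schema
    oc get hostfirmwaresettings ostest-worker-0 -n openshift-machine-api -o jsonpath='{.status.conditions}'

Under the hood, oc patch uses the PATCH endpoint for the named resource shown in section 6.2.2, so the query parameters documented there (dryRun, fieldManager, fieldValidation) apply to it as well.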
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/provisioning_apis/hostfirmwaresettings-metal3-io-v1alpha1
|
4.3. Configuration File Defaults
|
4.3. Configuration File Defaults The /etc/multipath.conf configuration file includes a defaults section that sets the user_friendly_names parameter to yes , as follows. This overwrites the default value of the user_friendly_names parameter. The configuration file includes a template of configuration defaults. This section is commented out, as follows. To overwrite the default value for any of the configuration parameters, you can copy the relevant line from this template into the defaults section and uncomment it. For example, to overwrite the path_grouping_policy parameter so that it is multibus rather than the default value of failover , copy the appropriate line from the template to the initial defaults section of the configuration file, and uncomment it, as follows. Table 4.1, "Multipath Configuration Defaults" describes the attributes that are set in the defaults section of the multipath.conf configuration file. These values are used by DM Multipath unless they are overwritten by the attributes specified in the devices and multipaths sections of the multipath.conf file. Table 4.1. Multipath Configuration Defaults Attribute Description polling_interval Specifies the interval between two path checks in seconds. For properly functioning paths, the interval between checks will gradually increase to (4 * polling_interval ). The default value is 5. multipath_dir The directory where the dynamic shared objects are stored. The default value is system dependent, commonly /lib/multipath . find_multipaths Defines the mode for setting up multipath devices. If this parameter is set to yes , then multipath will not try to create a device for every path that is not blacklisted. Instead multipath will create a device only if one of three conditions are met: - There are at least two paths that are not blacklisted with the same WWID. - The user manually forces the creation of the device by specifying a device with the multipath command. - A path has the same WWID as a multipath device that was previously created. Whenever a multipath device is created with find_multipaths set, multipath remembers the WWID of the device so that it will automatically create the device again as soon as it sees a path with that WWID. This allows you to have multipath automatically choose the correct paths to make into multipath devices, without having to edit the multipath blacklist. For instructions on the procedure to follow if you have previously created multipath devices when the find_multipaths parameter was not set, see Section 4.2, "Configuration File Blacklist" . The default value is no . The default multipath.conf file created by mpathconf , however, will enable find_multipaths as of Red Hat Enterprise Linux 7. reassign_maps Enable reassigning of device-mapper maps. With this option, The multipathd daemon will remap existing device-mapper maps to always point to the multipath device, not the underlying block devices. Possible values are yes and no . The default value is yes . verbosity The default verbosity. Higher values increase the verbosity level. Valid levels are between 0 and 6. The default value is 2 . path_selector Specifies the default algorithm to use in determining what path to use for the I/O operation. Possible values include: round-robin 0 : Loop through every path in the path group, sending the same amount of I/O to each. queue-length 0 : Send the bunch of I/O down the path with the least number of outstanding I/O requests. 
service-time 0 : Send the bunch of I/O down the path with the shortest estimated service time, which is determined by dividing the total size of the outstanding I/O to each path by its relative throughput. The default value is service-time 0 . path_grouping_policy Specifies the default path grouping policy to apply to unspecified multipaths. Possible values include: failover : 1 path per priority group. multibus : all valid paths in 1 priority group. group_by_serial : 1 priority group per detected serial number. group_by_prio : 1 priority group per path priority value. Priorities are determined by callout programs specified as global, per-controller, or per-multipath options. group_by_node_name : 1 priority group per target node name. Target node names are fetched in /sys/class/fc_transport/target*/node_name . The default value is failover . prio Specifies the default function to call to obtain a path priority value. For example, the ALUA bits in SPC-3 provide an exploitable prio value. Possible values include: const : Set a priority of 1 to all paths. emc : Generate the path priority for EMC arrays. alua : Generate the path priority based on the SCSI-3 ALUA settings. As of Red Hat Enterprise Linux 7.3, if you specify prio "alua exclusive_pref_bit" in your device configuration, multipath will create a path group that contains only the path with the pref bit set and will give that path group the highest priority. ontap : Generate the path priority for NetApp arrays. rdac : Generate the path priority for LSI/Engenio RDAC controller. hp_sw : Generate the path priority for Compaq/HP controller in active/standby mode. hds : Generate the path priority for Hitachi HDS Modular storage arrays. The default value is const . features The default extra features of multipath devices, using the format: " number_of_features_plus_arguments feature1 ...". Possible values for features include: queue_if_no_path , which is the same as setting no_path_retry to queue . For information on issues that may arise when using this feature, see Section 5.6, "Issues with queue_if_no_path feature" . retain_attached_hw_handler : If this parameter is set to yes and the SCSI layer has already attached a hardware handler to the path device, multipath will not force the device to use the hardware_handler specified by the multipath.conf file. If the SCSI layer has not attached a hardware handler, multipath will continue to use its configured hardware handler as usual. The default value is no . pg_init_retries n : Retry path group initialization up to n times before failing where 1 <= n <= 50. pg_init_delay_msecs n : Wait n milliseconds between path group initialization retries where 0 <= n <= 60000. path_checker Specifies the default method used to determine the state of the paths. Possible values include: readsector0 : Read the first sector of the device. tur : Issue a TEST UNIT READY command to the device. emc_clariion : Query the EMC Clariion specific EVPD page 0xC0 to determine the path. hp_sw : Check the path state for HP storage arrays with Active/Standby firmware. rdac : Check the path state for LSI/Engenio RDAC storage controller. directio : Read the first sector with direct I/O. The default value is directio . failback Manages path group failback. A value of immediate specifies immediate failback to the highest priority path group that contains active paths. A value of manual specifies that there should not be immediate failback but that failback can happen only with operator intervention. 
A value of followover specifies that automatic failback should be performed when the first path of a path group becomes active. This keeps a node from automatically failing back when another node requested the failover. A numeric value greater than zero specifies deferred failback, expressed in seconds. The default value is manual . rr_min_io Specifies the number of I/O requests to route to a path before switching to the path in the current path group. This setting is only for systems running kernels older than 2.6.31. Newer systems should use rr_min_io_rq . The default value is 1000. rr_min_io_rq Specifies the number of I/O requests to route to a path before switching to the path in the current path group, using request-based device-mapper-multipath. This setting should be used on systems running current kernels. On systems running kernels older than 2.6.31, use rr_min_io . The default value is 1. rr_weight If set to priorities , then instead of sending rr_min_io requests to a path before calling path_selector to choose the path, the number of requests to send is determined by rr_min_io times the path's priority, as determined by the prio function. If set to uniform , all path weights are equal. The default value is uniform . no_path_retry A numeric value for this attribute specifies the number of times the system should attempt to use a failed path before disabling queuing. A value of fail indicates immediate failure, without queuing. A value of queue indicates that queuing should not stop until the path is fixed. The default value is 0. user_friendly_names If set to yes , specifies that the system should use the /etc/multipath/bindings file to assign a persistent and unique alias to the multipath, in the form of mpath n . If set to no , specifies that the system should use the WWID as the alias for the multipath. In either case, what is specified here will be overridden by any device-specific aliases you specify in the multipaths section of the configuration file. The default value is no . queue_without_daemon If set to no , the multipathd daemon will disable queuing for all devices when it is shut down. The default value is no . flush_on_last_del If set to yes , the multipathd daemon will disable queuing when the last path to a device has been deleted. The default value is no . max_fds Sets the maximum number of open file descriptors that can be opened by multipath and the multipathd daemon. This is equivalent to the ulimit -n command. As of the Red Hat Enterprise Linux 6.3 release, the default value is max , which sets this to the system limit from /proc/sys/fs/nr_open . For earlier releases, if this is not set the maximum number of open file descriptors is taken from the calling process; it is usually 1024. To be safe, this should be set to the maximum number of paths plus 32, if that number is greater than 1024. checker_timeout The timeout to use for prioritizers and path checkers that issue SCSI commands with an explicit timeout, in seconds. The default value is taken from sys/block/sd x /device/timeout . fast_io_fail_tmo The number of seconds the SCSI layer will wait after a problem has been detected on an FC remote port before failing I/O to devices on that remote port. This value should be smaller than the value of dev_loss_tmo . Setting this to off will disable the timeout. The default value is determined by the OS. The fast_io_fail_tmo option overrides the values of the recovery_tmo and replacement_timeout options. 
For details, see Section 4.6, "iSCSI and DM Multipath overrides" . dev_loss_tmo The number of seconds the SCSI layer will wait after a problem has been detected on an FC remote port before removing it from the system. Setting this to infinity will set this to 2147483647 seconds, or 68 years. The default value is determined by the OS. hw_string_match Each device configuration in the devices section of the multipath.conf file will either create its own device configuration or it will modify one of the built-in device configurations. If hw_string_match is set to yes , then if the vendor, product, and revision strings in a user's device configuration exactly match those strings in a built-in device configuration, the built-in configuration is modified by the options in the user's configuration. Otherwise, the user's device configuration is treated as a new configuration. If hw_string_match is set to no , a regular expression match is used instead of a string match. The hw_string_match parameter is set to no by default. retain_attached_hw_handler If this parameter is set to yes and the SCSI layer has already attached a hardware handler to the path device, multipath will not force the device to use the hardware_handler specified by the multipath.conf file. If the SCSI layer has not attached a hardware handler, multipath will continue to use its configured hardware handler as usual. The default value is no . detect_prio If this is set to yes , multipath will first check if the device supports ALUA, and if so it will automatically assign the device the alua prioritizer. If the device does not support ALUA, it will determine the prioritizer as it always does. The default value is no . uid_attribute Provides a unique path identifier. The default value is ID_SERIAL . force_sync (Red Hat Enterprise Linux Release 7.1 and later) If this is set to "yes", it prevents path checkers from running in async mode. This means that only one checker will run at a time. This is useful in the case where many multipathd checkers running in parallel causes significant CPU pressure. The default value is no . delay_watch_checks (Red Hat Enterprise Linux Release 7.2 and later) If set to a value greater than 0, the multipathd daemon will watch paths that have recently become valid for the specified number of checks. If they fail again while they are being watched, when they become valid they will not be used until they have stayed up for the number of consecutive checks specified with delay_wait_checks . This allows you to keep paths that may be unreliable from immediately being put back into use as soon as they come back online. The default value is no . delay_wait_checks (Red Hat Enterprise Linux Release 7.2 and later) If set to a value greater than 0, when a device that has recently come back online fails again within the number of checks specified with delay_watch_checks , the time it comes back online it will be marked and delayed and it will not be used until it has passed the number of checks specified in delay_wait_checks . The default value is no . ignore_new_boot_devs (Red Hat Enterprise Linux Release 7.2 and later) If set to yes , when the node is still in the initramfs file system during early boot, multipath will not create any devices whose WWIDs do not already exist in the initramfs copy of the /etc/multipath/wwids . 
This feature can be used for booting up during installation, when multipath would otherwise attempt to set itself up on devices that it did not claim when they first appeared by means of the udev rules. This parameter can be set to yes or no . If unset, it defaults to no . retrigger_tries , retrigger_delay (Red Hat Enterprise Linux Release 7.2 and later) The retrigger_tries and retrigger_delay parameters are used in conjunction to make multipathd retrigger uevents if udev failed to completely process the original ones, leaving the device unusable by multipath. The retrigger_tries parameter sets the number of times that multipath will try to retrigger a uevent if a device has not been completely set up. The retrigger_delay parameter sets the number of seconds between retries. Both of these options accept numbers greater than or equal to zero. Setting the retrigger_tries parameter to zero disables retries. Setting the retrigger_delay parameter to zero causes the uevent to be reissued on the loop of the path checker. If the retrigger_tries parameter is unset, it defaults to 3. If the retrigger_delay parameter is unset, it defaults to 10. new_bindings_in_boot (Red Hat Enterprise Linux Release 7.2 and later) The new_bindings_in_boot parameter is used to keep multipath from giving out a user_friendly_name in the initramfs file system that was already given out by the bindings file in the regular file system, an issue that can arise since the user_friendly_names bindings in the initramfs file system get synced with the bindings in the regular file system only when the initramfs file system is remade. When this parameter is set to no multipath will not create any new bindings in the initramfs file system. If a device does not already have a binding in the initramfs copy of /etc/multipath/bindings , multipath will use its WWID as an alias instead of giving it a user_friendly_name . Later in boot, after the node has mounted the regular filesystem, multipath will give out a user_friendly_name to the device. This parameter can be set to yes or no . If unset, it defaults to no . config_dir (Red Hat Enterprise Linux Release 7.2 and later) If set to anything other than "" , multipath will search this directory alphabetically for files ending in ".conf" and it will read configuration information from them, just as if the information were in the /etc/multipath.conf file. This allows you to have one main configuration that you share between machines in addition to a separate machine-specific configuration file or files. The config_dir parameter must either be "" or a fully qualified directory name. This parameter can be set only in the main /etc/multipath.conf file and not in one of the files specified in the config_dir file itself. The default value is /etc/multipath/conf.d . deferred_remove If set to yes , multipathd will do a deferred remove instead of a regular remove when the last path device has been deleted. This ensures that if a multipathed device is in use when a regular remove is performed and the remove fails, the device will automatically be removed when the last user closes the device. The default value is no . log_checker_err If set to once , multipathd logs the first path checker error at verbosity level 2. Any later errors are logged at verbosity level 3 until the device is restored. If it is set to always , multipathd always logs the path checker error at verbosity level 2. The default value is always . 
skip_kpartx (Red Hat Enterprise Linux Release 7.3 and later) If set to yes , kpartx will not automatically create partitions on the device. This allows users to create a multipath device without creating partitions, even if the device has a partition table. The default value of this option is no . max_sectors_kb (Red Hat Enterprise Linux Release 7.4 and later) Sets the max_sectors_kb device queue parameter to the specified value on all underlying paths of a multipath device before the multipath device is first activated. When a multipath device is created, the device inherits the max_sectors_kb value from the path devices. Manually raising this value for the multipath device or lowering this value for the path devices can cause multipath to create I/O operations larger than the path devices allow. Using the max_sectors_kb parameter is an easy way to set these values before a multipath device is created on top of the path devices and prevent invalid-sized I/O operations from being passed. If this parameter is not set by the user, the path devices have it set by their device driver, and the multipath device inherits it from the path devices. remove_retries (Red Hat Enterprise Linux Release 7.4 and later) Sets how many times multipath will retry removing a device that is in use. Between each attempt, multipath will sleep 1 second. The default value is 0, which means that multipath will not retry the remove. disable_changed_wwids (Red Hat Enterprise Linux Release 7.4 and later) If set to yes and the WWID of a path device changes while it is part of a multipath device, multipath will disable access to the path device until the WWID of the path is restored to the WWID of the multipath device. The default value is no , which does not check if a path's WWID has changed. detect_path_checker (Red Hat Enterprise Linux Release 7.4 and later) If set to yes , multipath will try to detect if the device supports ALUA. If so, the device will automatically use the tur path checker. If not, the path_checker will be selected as usual. The default value is no . reservation_key This is the service action reservation key used by mpathpersist . It must be set for all multipath devices using persistent reservations, and it must be the same as the RESERVATION KEY field of the PERSISTENT RESERVE OUT parameter list which contains an 8-byte value provided by the application client to the device server to identify the I_T nexus. If the --param-aptpl option is used when registering the key with mpathpersist , :aptpl must be appended to the end of the reservation key. As of Red Hat Enterprise Linux Release 7.5, this parameter can be set to file , which will store the RESERVATION KEY registered by mpathpersist in the prkeys file. The multipathd daemon will then use this key to register additional paths as they appear. When the registration is removed, the RESERVATION KEY is removed from the prkeys file. It is unset by default. prkeys_file (Red Hat Enterprise Linux Release 7.5 and later) The full path name of the prkeys file, which is used by the multipathd daemon to keep track of the reservation key used for a specific WWID when the reservation_key parameter is set to file . The default value is /etc/multipath/prkeys . all_tg_pt (Red Hat Enterprise Linux Release 7.6 and later) If this option is set to yes , when mpathpersist registers keys it will treat a key registered from one host to one target port as going from one host to all target ports.
This must be set to yes to successfully use mpathpersist on arrays that automatically set and clear registration keys on all target ports from a host, instead of per target port per host. The default value is no .
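To tie several of these attributes together, the following defaults stanza is a minimal sketch of the override pattern described at the beginning of this section; the particular values are illustrative choices, not recommendations, and only attributes listed in Table 4.1 are used.

    defaults {
            user_friendly_names  yes
            path_grouping_policy multibus
            path_selector        "service-time 0"
            no_path_retry        12
            max_fds              max
    }

After editing /etc/multipath.conf, the running daemon can re-read the file from the multipathd interactive console (start it with multipathd -k, then issue reconfigure), and the show config command from the same console displays the merged configuration so you can confirm which defaults are in effect.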
|
[
"defaults { user_friendly_names yes }",
"#defaults { polling_interval 10 path_selector \"round-robin 0\" path_grouping_policy multibus uid_attribute ID_SERIAL prio alua path_checker readsector0 rr_min_io 100 max_fds 8192 rr_weight priorities failback immediate no_path_retry fail user_friendly_names yes #}",
"defaults { user_friendly_names yes path_grouping_policy multibus }"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/config_file_defaults
|
Chapter 12. Pools
|
Chapter 12. Pools 12.1. Introduction to Virtual Machine Pools A virtual machine pool is a group of virtual machines that are all clones of the same template and that can be used on demand by any user in a given group. Virtual machine pools allow administrators to rapidly configure a set of generalized virtual machines for users. Users access a virtual machine pool by taking a virtual machine from the pool. When a user takes a virtual machine from a pool, they are provided with any one of the virtual machines in the pool if any are available. That virtual machine will have the same operating system and configuration as that of the template on which the pool was based, but users may not receive the same member of the pool each time they take a virtual machine. Users can also take multiple virtual machines from the same virtual machine pool depending on the configuration of that pool. Virtual machine pools are stateless by default, meaning that virtual machine data and configuration changes are not persistent across reboots. However, the pool can be configured to be stateful, allowing changes made by a user to persist. In addition, if a user configures console options for a virtual machine taken from a virtual machine pool, those options are set as the default for that user for that virtual machine pool. Note Virtual machines taken from a pool are not stateless when accessed from the Administration Portal. This is because administrators need to be able to write changes to the disk if necessary. In principle, virtual machines in a pool are started when taken by a user, and shut down when the user is finished. However, virtual machine pools can also contain pre-started virtual machines. Pre-started virtual machines are kept in an up state, and remain idle until they are taken by a user. This allows users to start using such virtual machines immediately, but these virtual machines consume system resources even while they sit idle.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-pools
|
Chapter 19. Changing a subscription service
|
Chapter 19. Changing a subscription service To manage the subscriptions, you can register a RHEL system with either Red Hat Subscription Management Server or Red Hat Satellite Server. If required, you can change the subscription service at a later point. To change the subscription service under which you are registered, unregister the system from the current service and then register it with a new service. To receive the system updates, register your system with either of the management servers. This section contains information about how to unregister your RHEL system from the Red Hat Subscription Management Server and Red Hat Satellite Server. Prerequisites You have registered your system with any one of the following: Red Hat Subscription Management Server Red Hat Satellite Server version 6.11 To receive the system updates, register your system with either of the management servers. 19.1. Unregistering from Subscription Management Server This section contains information about how to unregister a RHEL system from Red Hat Subscription Management Server, using a command line and the Subscription Manager user interface. 19.1.1. Unregistering using command line Use the unregister command to unregister a RHEL system from Red Hat Subscription Management Server. Procedure Run the unregister command as a root user, without any additional parameters. When prompted, provide a root password. The system is unregistered from the Subscription Management Server, and the status 'The system is currently not registered' is displayed with the Register button enabled. To continue uninterrupted services, re-register the system with either of the management services. If you do not register the system with a management service, you may fail to receive the system updates. For more information about registering a system, see Registering your system using the command line . Additional resources Using and Configuring Red Hat Subscription Manager 19.1.2. Unregistering using Subscription Manager user interface You can unregister a RHEL system from Red Hat Subscription Management Server by using Subscription Manager user interface. Procedure Log in to your system. From the top left-hand side of the window, click Activities . From the menu options, click the Show Applications icon. Click the Red Hat Subscription Manager icon, or enter Red Hat Subscription Manager in the search. Enter your administrator password in the Authentication Required dialog box. The Subscriptions window appears and displays the current status of Subscriptions, System Purpose, and installed products. Unregistered products display a red X. Authentication is required to perform privileged tasks on the system. Click the Unregister button. The system is unregistered from the Subscription Management Server, and the status 'The system is currently not registered' is displayed with the Register button enabled. To continue uninterrupted services, re-register the system with either of the management services. If you do not register the system with a management service, you may fail to receive the system updates. For more information about registering a system, see Registering your system using the Subscription Manager User Interface . Additional resources Using and Configuring Red Hat Subscription Manager 19.2. Unregistering from Satellite Server To unregister a Red Hat Enterprise Linux system from Satellite Server, remove the system from Satellite Server. For more information, see Removing a Host from Red Hat Satellite .
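As an illustrative sketch of the full switch, the commands below unregister from one service and re-register with another; the organization ID and activation key shown for the Satellite case are placeholders, and registering to Satellite normally also requires the consumer CA certificate package provided by your Satellite Server.

    # Unregister from the current service and remove stale local subscription data
    subscription-manager unregister
    subscription-manager clean

    # Re-register with the Red Hat Subscription Management Server (prompts for the password)
    subscription-manager register --username <admin-user> --auto-attach

    # Or re-register with a Satellite Server using a placeholder organization and activation key
    subscription-manager register --org=<org-id> --activationkey=<key>

Whichever target you choose, verify the result with subscription-manager status before relying on the system for updates.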
|
[
"subscription-manager unregister"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automatically_installing_rhel/changing-a-subscripton-service_rhel-installer
|
Installing on Alibaba Cloud
|
Installing on Alibaba Cloud OpenShift Container Platform 4.18 Installing OpenShift Container Platform on Alibaba Cloud Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_alibaba_cloud/index
|
Chapter 1. Red Hat JBoss Web Server Operator
|
Chapter 1. Red Hat JBoss Web Server Operator An Operator is a Kubernetes-native application that makes it easy to manage complex stateful applications in Kubernetes and OpenShift environments. Red Hat JBoss Web Server (JWS) provides an Operator to manage JWS for OpenShift images. You can use the JWS Operator to create, configure, manage, and seamlessly upgrade instances of web server applications in OpenShift. Operators include the following key concepts: The Operator Framework is a toolkit to manage Operators in an effective, automated, and scalable way. The Operator Framework consists of three main components: You can use OperatorHub to discover Operators that you want to install. You can use the Operator Lifecycle Manager (OLM) to install and manage Operators in your OpenShift cluster. You can use the Operator SDK if you want to develop your own custom Operators. An Operator group is an OLM resource that provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate role-based access control (RBAC) for all Operators that are deployed in the same namespace as the OperatorGroup object. Custom resource definitions (CRDs) are a Kubernetes extension mechanism that Operators use. CRDs allow the custom objects that an Operator manages to behave like native Kubernetes objects. The JWS Operator provides a set of CRD parameters that you can specify in custom resource files for web server applications that you want to deploy. This document describes how to install the JWS Operator, deploy an existing JWS image, and delete Operators from a cluster. This document also provides details of the CRD parameters that the JWS Operator provides. Note Before you follow the instructions in this guide, you must ensure that an OpenShift cluster is already installed and configured as a prerequisite. For more information about installing and configuring OpenShift clusters, see the OpenShift Container Platform Installing guide. For a faster but less detailed guide to deploying a prepared image or building an image from an existing image stream, see the JWS Operator QuickStart guide. Important Red Hat supports images for JWS 5.4 or later versions only. Additional resources Understanding Operators Installing and configuring OpenShift Container Platform clusters
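As a quick, non-authoritative check after the Operator is installed through OLM, you can confirm from the CLI that its ClusterServiceVersion succeeded and that its custom resource definitions are registered. The namespace placeholder and grep pattern below are assumptions for illustration; adjust them to your cluster.

# List ClusterServiceVersions in the namespace where the JWS Operator was installed
oc get csv -n <operator-namespace>

# Confirm that the Operator's custom resource definitions are registered with the cluster
oc get crd | grep -i webserver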
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_operator/con_jws-operator_jws-operator
|
5.2. Converting to an Ext3 File System
|
5.2. Converting to an Ext3 File System The tune2fs command converts an ext2 file system to ext3 . Note A default installation of Red Hat Enterprise Linux uses ext4 for all file systems. When converting ext2 to ext3, always use the e2fsck utility to check your file system before and after using tune2fs . Before trying to convert ext2 to ext3, back up all file systems in case any errors occur. In addition, Red Hat recommends creating a new ext3 file system and migrating data to it, instead of converting from ext2 to ext3 whenever possible. To convert an ext2 file system to ext3 , log in as root and type the following command in a terminal: block_device contains the ext2 file system to be converted. A valid block device can be one of two types of entries: A mapped device A logical volume in a volume group, for example, /dev/mapper/VolGroup00-LogVol02 . A static device A traditional storage volume, for example, /dev/sdbX , where sdb is a storage device name and X is the partition number. Issue the df command to display mounted file systems.
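A minimal end-to-end sketch of the conversion, assuming the ext2 file system lives on a hypothetical partition /dev/sdb1 that can be unmounted; adjust the device name, mount point, and /etc/fstab entry for your system.

# Unmount the file system before checking or converting it
umount /dev/sdb1

# Check the ext2 file system before the conversion
e2fsck -f /dev/sdb1

# Add an ext3 journal to the file system
tune2fs -j /dev/sdb1

# Check the file system again after the conversion
e2fsck -f /dev/sdb1

# Remount it as ext3 (also change the type to ext3 in /etc/fstab)
mount -t ext3 /dev/sdb1 /mnt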
|
[
"tune2fs -j block_device"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-filesystem-ext3-convert
|
Chapter 8. mod_auth_mellon Apache Module
|
Chapter 8. mod_auth_mellon Apache Module The mod_auth_mellon module is an authentication module for Apache. If your language/environment supports using Apache HTTPD as a proxy, then you can use mod_auth_mellon to secure your web application with SAML. For more details on this module see the mod_auth_mellon GitHub repo. Warning Red Hat build of Keycloak does not provide any official support for mod_auth_mellon. The instructions below are best-effort and may not be up-to-date. The chapter assumes that the server is a RHEL system, although similar steps would be needed for other Linux systems. We recommend that you stick to official mod_auth_mellon documentation for more details. To configure mod_auth_mellon you need the following files: An Identity Provider (IdP) entity descriptor XML file, which describes the connection to Red Hat build of Keycloak or another SAML IdP An SP entity descriptor XML file, which describes the SAML connections and configuration for the application you are securing. A private key PEM file, which is a text file in the PEM format that defines the private key the application uses to sign documents. A certificate PEM file, which is a text file that defines the certificate for your application. mod_auth_mellon-specific Apache HTTPD module configuration. If you have already defined and registered the client application within a realm on the Red Hat build of Keycloak application server, Red Hat build of Keycloak can generate all the files you need except the Apache HTTPD module configuration. Perform the following procedure to generate the Apache HTTPD module configuration. Procedure Go to the Installation page of your SAML client. Select the Mod Auth Mellon files option. Figure 8.1. mod_auth_mellon config download Click Download to download a ZIP file that contains the XML descriptor and PEM files you need. 8.1. Configuring mod_auth_mellon with Red Hat build of Keycloak There are two hosts involved: The host on which Red Hat build of Keycloak is running, which will be referred to as $idp_host because Red Hat build of Keycloak is a SAML identity provider (IdP). The host on which the web application is running, which will be referred to as $sp_host. In SAML, an application using an IdP is called a service provider (SP). All of the following steps need to be performed on $sp_host with root privileges. 8.1.1. Installing the packages To install the necessary packages, you will need: Apache Web Server (httpd) Mellon SAML SP add-on module for Apache Tools to create X509 certificates To install the necessary packages, run this command: 8.1.2. Creating a configuration directory for Apache SAML It is advisable to keep configuration files related to Apache's use of SAML in one location. Create a new directory named saml2 located under the Apache configuration root /etc/httpd : 8.1.3. Configuring the Mellon Service Provider Configuration files for Apache add-on modules are located in the /etc/httpd/conf.d directory and have a file name extension of .conf . You need to create the /etc/httpd/conf.d/mellon.conf file and place Mellon's configuration directives in it. Mellon's configuration directives can roughly be broken down into two classes of information: Which URLs to protect with SAML authentication What SAML parameters will be used when a protected URL is referenced. Apache configuration directives typically follow a hierarchical tree structure in the URL space, which are known as locations. You need to specify one or more URL locations for Mellon to protect.
You have flexibility in how you add the configuration parameters that apply to each location. You can either add all the necessary parameters to the location block or you can add Mellon parameters to a common location high up in the URL location hierarchy that specific protected locations inherit (or some combination of the two). Since it is common for an SP to operate in the same way no matter which location triggers SAML actions, the example configuration used here places common Mellon configuration directives in the root of the hierarchy and then specific locations to be protected by Mellon can be defined with minimal directives. This strategy avoids duplicating the same parameters for each protected location. This example has just one protected location: https://USDsp_host/private. To configure the Mellon service provider, perform the following procedure. Procedure Create the file /etc/httpd/conf.d/mellon.conf with this content: <Location / > MellonEnable info MellonEndpointPath /mellon/ MellonSPMetadataFile /etc/httpd/saml2/mellon_metadata.xml MellonSPPrivateKeyFile /etc/httpd/saml2/mellon.key MellonSPCertFile /etc/httpd/saml2/mellon.crt MellonIdPMetadataFile /etc/httpd/saml2/idp_metadata.xml </Location> <Location /private > AuthType Mellon MellonEnable auth Require valid-user </Location> Note Some of the files referenced in the code above are created in later steps. 8.1.4. Setting the SameSite value for the cookie used by mod_auth_mellon Browsers are planning to set the default value for the SameSite attribute for cookies to Lax . This setting means that cookies will be sent to applications only if the request originates in the same domain. This behavior can affect the SAML POST binding which may become non-functional. To preserve full functionality of the mod_auth_mellon module, we recommend setting the SameSite value to None for the cookie created by mod_auth_mellon . Not doing so may result in an inability to login using Red Hat build of Keycloak. To set the SameSite value to None , add the following configuration to <Location / > tag within your mellon.conf file. MellonSecureCookie On MellonCookieSameSite none The support for this configuration is available in the mod_auth_mellon module from version 0.16.0. 8.1.5. Creating the Service Provider metadata In SAML IdPs and SPs exchange SAML metadata, which is in XML format. The schema for the metadata is a standard, thus assuring participating SAML entities can consume each other's metadata. You need: Metadata for the IdP that the SP utilizes Metadata describing the SP provided to the IdP One of the components of SAML metadata is X509 certificates. These certificates are used for two purposes: Sign SAML messages so the receiving end can prove the message originated from the expected party. Encrypt the message during transport (seldom used because SAML messages typically occur on TLS-protected transports) You can use your own certificates if you already have a Certificate Authority (CA) or you can generate a self-signed certificate. For simplicity in this example a self-signed certificate is used. Because Mellon's SP metadata must reflect the capabilities of the installed version of mod_auth_mellon, must be valid SP metadata XML, and must contain an X509 certificate (whose creation can be obtuse unless you are familiar with X509 certificate generation) the most expedient way to produce the SP metadata is to use a tool included in the mod_auth_mellon package ( mellon_create_metadata.sh ). 
The generated metadata can always be edited later because it is a text file. The tool also creates your X509 key and certificate. SAML IdPs and SPs identify themselves using a unique name known as an EntityID. To use the Mellon metadata creation tool you need: The EntityID, which is typically the URL of the SP, and often the URL of the SP where the SP metadata can be retrieved The URL where SAML messages for the SP will be consumed, which Mellon calls the MellonEndPointPath. To create the SP metadata, perform the following procedure. Procedure Create a few helper shell variables: Invoke the Mellon metadata creation tool by running this command: Move the generated files to their destination (referenced in the /etc/httpd/conf.d/mellon.conf file created above): 8.1.6. Adding the Mellon Service Provider to the Red Hat build of Keycloak Identity Provider Assumption: The Red Hat build of Keycloak IdP has already been installed on the $idp_host. Red Hat build of Keycloak supports multiple tenancy where all users, clients, and so on are grouped in what is called a realm. Each realm is independent of other realms. You can use an existing realm in your Red Hat build of Keycloak, but this example shows how to create a new realm called test_realm and use that realm. All these operations are performed using the Red Hat build of Keycloak Admin Console. You must have the admin username and password for $idp_host to perform the following procedure. Procedure Open the Admin Console and log on by entering the admin username and password. After logging into the Admin Console, there will be an existing realm. When Red Hat build of Keycloak is first set up, a root realm, master, is created by default. Any previously created realms are listed in the upper left corner of the Admin Console in a drop-down list. From the realm drop-down list select Add realm . In the Name field type test_realm and click Create . 8.1.6.1. Adding the Mellon Service Provider as a client of the realm In Red Hat build of Keycloak, SAML SPs are known as clients. To add the SP we must be in the Clients section of the realm. Click the Clients menu item on the left and click the Import client button. In the Resource file field, provide the Mellon SP metadata file created above ( /etc/httpd/saml2/mellon_metadata.xml ). Depending on where your browser is running you might have to copy the SP metadata from $sp_host to the machine on which your browser is running so the browser can find the file. Click Save . 8.1.6.2. Editing the Mellon SP client Use this procedure to set important client configuration parameters. Procedure Ensure Force POST Binding is On. Add paosResponse to the Valid Redirect URIs list: Copy the postResponse URL in Valid Redirect URIs and paste it into the empty add text fields just below the "+". Change postResponse to paosResponse . (The paosResponse URL is needed for SAML ECP.) Click Save at the bottom. Many SAML SPs determine authorization based on a user's membership in a group. The Red Hat build of Keycloak IdP can manage user group information but it does not supply the user's groups unless the IdP is configured to supply it as a SAML attribute. Perform the following procedure to configure the IdP to supply the user's groups as a SAML attribute. Procedure Click the Client scopes tab of the client. Click the dedicated scope placed in the first row. In the Mappers page, click the Add mapper button and select By configuration . From the Mapper Type list select Group list . Set Name to group list .
Set the SAML attribute name to groups . Click Save . The remaining steps are performed on $sp_host. 8.1.6.3. Retrieving the Identity Provider metadata Now that you have created the realm on the IdP you need to retrieve the IdP metadata associated with it so the Mellon SP recognizes it. In the /etc/httpd/conf.d/mellon.conf file created previously, the MellonIdPMetadataFile is specified as /etc/httpd/saml2/idp_metadata.xml but until now that file has not existed on $sp_host. Use this procedure to retrieve that file from the IdP. Procedure Use this command, substituting the correct value for $idp_host: Mellon is now fully configured. To run a syntax check for Apache configuration files, use this command: Note Configtest is equivalent to the -t argument to apachectl. If the configuration test shows any errors, correct them before proceeding. Restart the Apache server: You have now set up both Red Hat build of Keycloak as a SAML IdP in the test_realm and mod_auth_mellon as a SAML SP protecting the URL $sp_host/private (and everything beneath it) by authenticating against the $idp_host IdP.
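After restarting Apache, the following sketch shows one way to confirm from $sp_host that the module is loaded and that the SP metadata is served; the metadata URL is an assumption that simply follows the MellonEndpointPath configured earlier.

# Confirm that the mod_auth_mellon module is loaded into httpd
httpd -M | grep -i mellon

# Fetch the SP metadata that Mellon publishes under the configured endpoint path
curl -k https://$sp_host/mellon/metadata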
|
[
"install httpd mod_auth_mellon mod_ssl openssl",
"mkdir /etc/httpd/saml2",
"<Location / > MellonEnable info MellonEndpointPath /mellon/ MellonSPMetadataFile /etc/httpd/saml2/mellon_metadata.xml MellonSPPrivateKeyFile /etc/httpd/saml2/mellon.key MellonSPCertFile /etc/httpd/saml2/mellon.crt MellonIdPMetadataFile /etc/httpd/saml2/idp_metadata.xml </Location> <Location /private > AuthType Mellon MellonEnable auth Require valid-user </Location>",
"MellonSecureCookie On MellonCookieSameSite none",
"fqdn=`hostname` mellon_endpoint_url=\"https://USD{fqdn}/mellon\" mellon_entity_id=\"USD{mellon_endpoint_url}/metadata\" file_prefix=\"USD(echo \"USDmellon_entity_id\" | sed 's/[^A-Za-z.]/_/g' | sed 's/__*/_/g')\"",
"/usr/libexec/mod_auth_mellon/mellon_create_metadata.sh USDmellon_entity_id USDmellon_endpoint_url",
"mv USD{file_prefix}.cert /etc/httpd/saml2/mellon.crt mv USD{file_prefix}.key /etc/httpd/saml2/mellon.key mv USD{file_prefix}.xml /etc/httpd/saml2/mellon_metadata.xml",
"curl -k -o /etc/httpd/saml2/idp_metadata.xml https://USDidp_host/realms/test_realm/protocol/saml/descriptor",
"apachectl configtest",
"systemctl restart httpd.service"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/securing_applications_and_services_guide/mod-auth-mellon-
|
Chapter 19. Additional introspection operations
|
Chapter 19. Additional introspection operations 19.1. Performing individual node introspection To perform a single introspection on an available node, run the following commands to set the node to management mode and perform the introspection: After the introspection completes, the node changes to an available state. 19.2. Performing node introspection after initial introspection After an initial introspection, all nodes enter an available state due to the --provide option. To perform introspection on all nodes after the initial introspection, set all nodes to a manageable state and run the bulk introspection command: After the introspection completes, all nodes change to an available state. 19.3. Performing network introspection for interface information Network introspection retrieves link layer discovery protocol (LLDP) data from network switches. The following commands show a subset of LLDP information for all interfaces on a node, or full information for a particular node and interface. This can be useful for troubleshooting. Director enables LLDP data collection by default. To get a list of interfaces on a node, run the following command: For example: To view interface data and switch port information, run the following command: For example: Retrieving hardware introspection details The Bare Metal service hardware-inspection-extras feature is enabled by default, and you can use it to retrieve hardware details for overcloud configuration. For more information about the inspection_extras parameter in the undercloud.conf file, see Configuring the Director . For example, the numa_topology collector is part of the hardware-inspection extras and includes the following information for each NUMA node: RAM (in kilobytes) Physical CPU cores and their sibling threads NICs associated with the NUMA node To retrieve the information listed above, substitute <UUID> with the UUID of the bare-metal node to complete the following command: The following example shows the retrieved NUMA information for a bare-metal node:
|
[
"(undercloud) USD openstack baremetal node manage [NODE UUID] (undercloud) USD openstack overcloud node introspect [NODE UUID] --provide",
"(undercloud) USD for node in USD(openstack baremetal node list --fields uuid -f value) ; do openstack baremetal node manage USDnode ; done (undercloud) USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud) USD openstack baremetal introspection interface list [NODE UUID]",
"(undercloud) USD openstack baremetal introspection interface list c89397b7-a326-41a0-907d-79f8b86c7cd9 +-----------+-------------------+------------------------+-------------------+----------------+ | Interface | MAC Address | Switch Port VLAN IDs | Switch Chassis ID | Switch Port ID | +-----------+-------------------+------------------------+-------------------+----------------+ | p2p2 | 00:0a:f7:79:93:19 | [103, 102, 18, 20, 42] | 64:64:9b:31:12:00 | 510 | | p2p1 | 00:0a:f7:79:93:18 | [101] | 64:64:9b:31:12:00 | 507 | | em1 | c8:1f:66:c7:e8:2f | [162] | 08:81:f4:a6:b3:80 | 515 | | em2 | c8:1f:66:c7:e8:30 | [182, 183] | 08:81:f4:a6:b3:80 | 559 | +-----------+-------------------+------------------------+-------------------+----------------+",
"(undercloud) USD openstack baremetal introspection interface show [NODE UUID] [INTERFACE]",
"(undercloud) USD openstack baremetal introspection interface show c89397b7-a326-41a0-907d-79f8b86c7cd9 p2p1 +--------------------------------------+------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +--------------------------------------+------------------------------------------------------------------------------------------------------------------------+ | interface | p2p1 | | mac | 00:0a:f7:79:93:18 | | node_ident | c89397b7-a326-41a0-907d-79f8b86c7cd9 | | switch_capabilities_enabled | [u'Bridge', u'Router'] | | switch_capabilities_support | [u'Bridge', u'Router'] | | switch_chassis_id | 64:64:9b:31:12:00 | | switch_port_autonegotiation_enabled | True | | switch_port_autonegotiation_support | True | | switch_port_description | ge-0/0/2.0 | | switch_port_id | 507 | | switch_port_link_aggregation_enabled | False | | switch_port_link_aggregation_id | 0 | | switch_port_link_aggregation_support | True | | switch_port_management_vlan_id | None | | switch_port_mau_type | Unknown | | switch_port_mtu | 1514 | | switch_port_physical_capabilities | [u'1000BASE-T fdx', u'100BASE-TX fdx', u'100BASE-TX hdx', u'10BASE-T fdx', u'10BASE-T hdx', u'Asym and Sym PAUSE fdx'] | | switch_port_protocol_vlan_enabled | None | | switch_port_protocol_vlan_ids | None | | switch_port_protocol_vlan_support | None | | switch_port_untagged_vlan_id | 101 | | switch_port_vlan_ids | [101] | | switch_port_vlans | [{u'name': u'RHOS13-PXE', u'id': 101}] | | switch_protocol_identities | None | | switch_system_name | rhos-compute-node-sw1 | +--------------------------------------+------------------------------------------------------------------------------------------------------------------------+",
"openstack baremetal introspection data save <UUID> | jq .numa_topology",
"{ \"cpus\": [ { \"cpu\": 1, \"thread_siblings\": [ 1, 17 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 10, 26 ], \"numa_node\": 1 }, { \"cpu\": 0, \"thread_siblings\": [ 0, 16 ], \"numa_node\": 0 }, { \"cpu\": 5, \"thread_siblings\": [ 13, 29 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 15, 31 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 7, 23 ], \"numa_node\": 0 }, { \"cpu\": 1, \"thread_siblings\": [ 9, 25 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 6, 22 ], \"numa_node\": 0 }, { \"cpu\": 3, \"thread_siblings\": [ 11, 27 ], \"numa_node\": 1 }, { \"cpu\": 5, \"thread_siblings\": [ 5, 21 ], \"numa_node\": 0 }, { \"cpu\": 4, \"thread_siblings\": [ 12, 28 ], \"numa_node\": 1 }, { \"cpu\": 4, \"thread_siblings\": [ 4, 20 ], \"numa_node\": 0 }, { \"cpu\": 0, \"thread_siblings\": [ 8, 24 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 14, 30 ], \"numa_node\": 1 }, { \"cpu\": 3, \"thread_siblings\": [ 3, 19 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 2, 18 ], \"numa_node\": 0 } ], \"ram\": [ { \"size_kb\": 66980172, \"numa_node\": 0 }, { \"size_kb\": 67108864, \"numa_node\": 1 } ], \"nics\": [ { \"name\": \"ens3f1\", \"numa_node\": 1 }, { \"name\": \"ens3f0\", \"numa_node\": 1 }, { \"name\": \"ens2f0\", \"numa_node\": 0 }, { \"name\": \"ens2f1\", \"numa_node\": 0 }, { \"name\": \"ens1f1\", \"numa_node\": 0 }, { \"name\": \"ens1f0\", \"numa_node\": 0 }, { \"name\": \"eno4\", \"numa_node\": 0 }, { \"name\": \"eno1\", \"numa_node\": 0 }, { \"name\": \"eno3\", \"numa_node\": 0 }, { \"name\": \"eno2\", \"numa_node\": 0 } ] }"
] |
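Beyond the numa_topology example above, the saved introspection data contains a full hardware inventory that you can filter with jq. The jq paths below follow the usual ironic-inspector data layout and are shown as assumptions; inspect the raw JSON if your deployment differs.

# Save the full introspection data for a node and keep a copy on disk
openstack baremetal introspection data save <UUID> > node-introspection.json

# Filter the saved data, for example to list the discovered disks and NICs
jq '.inventory.disks' node-introspection.json
jq '.inventory.interfaces' node-introspection.json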
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/additional-introspection-operations
|
20.16.13. Video Devices
|
20.16.13. Video Devices A video device. To specify the video device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <video> <model type='vga' vram='8192' heads='1'> <acceleration accel3d='yes' accel2d='yes'/> </model> </video> </devices> ... Figure 20.58. Video devices The elements used to configure a video device are explained below: Table 20.22. Graphical framebuffer elements Parameter Description video The video element is the container for describing video devices. For backwards compatibility, if no video is set but there is a graphics element in the domain XML, then libvirt will add a default video according to the guest virtual machine type. If "ram" or "vram" are not supplied a default value is used. model This has a mandatory type attribute which takes the value vga , cirrus , vmvga , xen , vbox , or qxl depending on the hypervisor features available. You can also provide the amount of video memory in kibibytes (blocks of 1024 bytes) using vram and the number of screens with the heads attribute. acceleration If acceleration is supported it should be enabled using the accel3d and accel2d attributes in the acceleration element. address The optional address sub-element can be used to tie the video device to a particular PCI slot.
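To apply a change such as the model type or vram value shown above, you typically edit the guest's domain XML with virsh and then verify the result; the guest name below is a placeholder.

# Open the domain XML of the guest in an editor and adjust the <video> element
virsh edit example-guest

# Confirm the resulting video device configuration
virsh dumpxml example-guest | grep -A 5 '<video>'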
|
[
"<devices> <video> <model type='vga' vram='8192' heads='1'> <acceleration accel3d='yes' accel2d='yes'/> </model> </video> </devices>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-section-libvirt-dom-xml-devices-video
|
Using the Hammer CLI tool
|
Using the Hammer CLI tool Red Hat Satellite 6.16 Administer Satellite or develop custom scripts by using Hammer, the Satellite command-line tool Red Hat Satellite Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_hammer_cli_tool/index
|
Chapter 12. Internationalization
|
Chapter 12. Internationalization 12.1. Red Hat Enterprise Linux 8 international languages Red Hat Enterprise Linux 8 supports the installation of multiple languages and the changing of languages based on your requirements. East Asian Languages - Japanese, Korean, Simplified Chinese, and Traditional Chinese. European Languages - English, German, Spanish, French, Italian, Portuguese, and Russian. The following table lists the fonts and input methods provided for various major languages. Language Default Font (Font Package) Input Methods English dejavu-sans-fonts French dejavu-sans-fonts German dejavu-sans-fonts Italian dejavu-sans-fonts Russian dejavu-sans-fonts Spanish dejavu-sans-fonts Portuguese dejavu-sans-fonts Simplified Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libpinyin, libpinyin Traditional Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libzhuyin, libzhuyin Japanese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-kkc, libkkc Korean google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-hangul, libhangul 12.2. Notable changes to internationalization in RHEL 8 RHEL 8 introduces the following changes to internationalization compared to RHEL 7: Support for the Unicode 11 computing industry standard has been added. Internationalization is distributed in multiple packages, which allows for smaller footprint installations. For more information, see Using langpacks . A number of glibc locales have been synchronized with Unicode Common Locale Data Repository (CLDR).
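A short, hedged example of installing an additional language and switching to it on RHEL 8, using the langpacks mechanism mentioned above; the Japanese locale is only an illustration.

# Install the language meta-package for Japanese
dnf install langpacks-ja

# Switch the system locale and verify the change
localectl set-locale LANG=ja_JP.UTF-8
localectl status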
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.8_release_notes/internationalization
|
Chapter 2. Creating and managing remediation playbooks in Insights
|
Chapter 2. Creating and managing remediation playbooks in Insights The workflow to create playbooks is similar in each of the services in Insights for Red Hat Enterprise Linux. In general, you will fix one or more issues on a system or group of systems. Playbooks focus on issues identified by Insights services. A recommended practice for playbooks is to include systems of the same RHEL major/minor versions because the resolutions will be compatible. 2.1. Creating a playbook to remediate a CVE vulnerability on RHEL systems Create a remediation playbook in the Red Hat Insights vulnerability service. The workflow to create a playbook is similar for other services in Insights for Red Hat Enterprise Linux. Prerequisites You are logged into the Red Hat Hybrid Cloud Console. Note No enhanced User Access permissions are required to create remediation playbooks. Procedure Navigate to the Security > Vulnerability > CVEs page. Set filters as needed and click on a CVE. Scroll down to view affected systems. Select systems to include in a remediation playbook by clicking the box to the left of the system ID. Note Include systems of the same RHEL major/minor version, which you can do by filtering the list of affected systems. Click the Remediate button. Select whether to add the remediations to an existing or new playbook and take the following action: Click Add to existing playbook and select the desired playbook from the dropdown list, OR Click Create new playbook and add a playbook name. Click Next . Review the systems to include in the playbook, then click Next . Review the information in the Remediation review summary. By default, autoreboot is enabled. You can disable this option by clicking Turn off autoreboot . Click Submit . Verification step Navigate to Automation Toolkit > Remediations . Search for your playbook. You should see your playbook. 2.1.1. Creating playbooks to remediate CVEs with security rules when recommended and alternate resolution options exist Most CVEs in Red Hat Insights for RHEL will have one remediation option for you to use to resolve an issue. Remediating a CVE with security rules might include more than one resolution: a recommended resolution and one or more alternate resolutions. The workflow to create playbooks for CVEs that have one or more resolution options is similar to the remediation steps in the advisor service. For more information about security rules, see Security rules , and Filtering lists of systems exposed to security rules in Assessing and Monitoring Security Vulnerabilities on RHEL Systems with FedRAMP . Prerequisites You are logged into the Red Hat Hybrid Cloud Console. Note You do not need enhanced User Access permissions to create remediation playbooks. Procedure Navigate to Security > Vulnerability > CVEs . Set filters if needed (for example, filter to see CVEs with security rules to focus on issues that have elevated risk associated with them). Or, click the CVEs with security rules tile on the dashbar. Both options show in the example image. Click a CVE in the list. Scroll to view affected systems, and select systems you want to include in a remediation playbook by clicking the box to the left of the system ID on the Review systems page. (Selecting one or more systems activates the Remediate button.) Note Recommended: Include systems of the same RHEL major or minor version by filtering the list of affected systems. Click Remediate .
Decide whether to add the remediations to an existing or new playbook by taking one of the following actions: Choose Add to existing playbook and select the desired playbook from the dropdown list, OR Choose Create new playbook , and add a playbook name. For this example, HCCDOC-392. Click Next . A list of systems shows on the screen. Review the systems to include in the playbook (deselect any systems that you do not want to include). Click Next to see the Review and edit actions page, which shows you options to remediate the CVE. The number of items to remediate can vary. You will also see additional information (that you can expand and collapse) about the CVE, such as: Action: Shows the CVE ID. Resolution: Displays the recommended resolution for the CVE. Shows if you have alternate resolution options. Reboot required: Shows whether you must reboot your systems. Systems: Shows the number of systems you are remediating. On the Review and edit actions page, choose one of two options to finish creating your playbook: Option 1: To review all of the recommended and alternative remediation options available (and choose one of those options): Select Review and/or change the resolution steps for this 1 action or similar based on your actual options. Click Next . On the Choose action: <CVE information> page, click a tile to select your preferred remediation option. The bottom edge of the tile highlights when you select it. The recommended solution is highlighted by default. Click Next . Option 2: To accept all recommended remediations: Choose Accept all recommended resolutions steps for all actions . Review information about your selections and change options for autoreboot of systems on the Remediations review page. The page shows you the: Issues you are adding to your playbook. Options for changing system autoreboot requirements. Summary about CVEs and resolution options to fix them. Optional. Change autoreboot options on the Remediation review page, if needed. (Autoreboot is enabled by default, but your settings might vary based on your remediation options.) Click Submit . A notification displays that shows the number of remediation actions added to your playbook, and other information about your playbook. Verification step Navigate to Automation Toolkit > Remediations . Search for your playbook. 2.2. Managing remediation playbooks in Insights for Red Hat Enterprise Linux You can download, archive, and delete existing remediation playbooks for your organization. The following procedures describe how to perform common playbook-management tasks. Prerequisites You are logged into the Red Hat Hybrid Cloud Console. Note No enhanced permissions are required to view, edit, or download information about existing playbooks. 2.2.1. Downloading a remediation playbook Use the following procedure to download a remediation playbook from the Insights for Red Hat Enterprise Linux application. Procedure Navigate to Automation Toolkit > Remediations . Locate the playbook you want to manage and click on the name of the playbook. The playbook details are visible. Click the Download playbook button to download the playbook YAML file to your local drive. 2.2.2. Archiving a remediation playbook You can archive a remediation playbook that is no longer needed, but the details of which you want to preserve. Procedure Navigate to Automation Toolkit > Remediations . Locate the playbook you want to archive. Click on the options icon (...) and select Archive playbook . The playbook is archived. 2.2.3.
Viewing archived remediation playbooks You can view archived remediation playbooks in Insights for Red Hat Enterprise Linux. Procedure Navigate to Automation Toolkit > Remediations . Click the More options icon that is to the right of the Download playbook button and select Show archived playbooks. 2.2.4. Deleting a remediation playbook You can delete a playbook that is no longer needed. Procedure Navigate to Automation Toolkit > Remediations . Locate and click on the name of the playbook you want to delete. On the playbook details page, click the More options icon and select Delete . 2.2.5. Monitoring remediation status You can view the remediation status for each playbook. The status information tells you the results of the latest activity and provides a summary of all activity for that playbook. You can also view log information. Prerequisites You are logged into the Red Hat Hybrid Cloud Console. Procedure Navigate to Automation Toolkit > Remediations . The page displays a list of remediation playbooks. Click on the name of a playbook. From the Actions tab, click any item in the Status column to view a pop-up box with the status of the resolution. To monitor the status of a playbook in the Satellite web UI, see Monitoring Remote Jobs in the Red Hat Satellite Managing Hosts guide.
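Once a playbook has been downloaded as described above, it is an ordinary Ansible playbook. The following sketch assumes a hypothetical file name and an inventory that lists the affected systems; execution through Satellite or remote host configuration manager may differ.

# Review the downloaded remediation playbook before running it
less remediation-playbook.yml

# Run the playbook against an inventory that contains the affected systems
ansible-playbook -i inventory.ini remediation-playbook.yml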
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/red_hat_insights_remediations_guide_with_fedramp/creating-managing-playbooks_red-hat-insights-remediation-guide
|
Preface
|
Preface
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_properties/preface
|
Policy APIs
|
Policy APIs OpenShift Container Platform 4.15 Reference guide for policy APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/policy_apis/index
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/making-open-source-more-inclusive
|
Updating OpenShift Data Foundation
|
Updating OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.17 Instructions for cluster and storage administrators regarding upgrading Red Hat Storage Documentation Team Abstract This document explains how to update versions of Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.16 and 4.17, or between z-stream updates like 4.16.0 and 4.16.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. Extended Update Support (EUS) EUS to EUS upgrade in OpenShift Data Foundation is sequential and it is aligned with OpenShift upgrade. For more information, see Performing an EUS-to-EUS update and EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager . For EUS upgrade of OpenShift Container Platform and OpenShift Data Foundation, make sure that OpenShift Data Foundation is upgraded along with OpenShift Container Platform and compatibility between OpenShift Data Foundation and OpenShift Container Platform is always maintained. Example workflow of EUS upgrade: Pause the worker machine pools. Update OpenShift <4.y> -> OpenShift <4.y+1>. Update OpenShift Data Foundation <4.y> -> OpenShift Data Foundation <4.y+1>. Update OpenShift <4.y+1> -> OpenShift <4.y+2>. Update to OpenShift Data Foundation <4.y+2>. Unpause the worker machine pools. Note You can update to ODF <4.y+2> either before or after worker machine pools are unpaused. Important When you update OpenShift Data Foundation in external mode, make sure that the Red Hat Ceph Storage and OpenShift Data Foundation versions are compatible. For more information about supported Red Hat Ceph Storage version in external mode, refer to Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Provide the required OpenShift Data Foundation version in the checker to see the supported Red Hat Ceph version corresponding to the version in use. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation.
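Before starting either update, it can help to confirm the versions currently running from the command line. A minimal sketch, assuming the default openshift-storage namespace:

# Check the current OpenShift Container Platform version
oc get clusterversion

# Check the installed OpenShift Data Foundation operator version
oc get csv -n openshift-storage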
To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.16 to 4.17 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure to update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility. It is also important to maintain consistency across all OpenShift Data Foundation DR instances. Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . Chapter 2. OpenShift Data Foundation upgrade channels and releases In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. As OpenShift Data Foundation gets deployed as an operator in OpenShift Container Platform, it follows the same strategy to control the pace of upgrades by shipping the fixes in multiple channels. Upgrade channels are tied to a minor version of OpenShift Data Foundation. For example, OpenShift Data Foundation 4.17 upgrade channels recommend upgrades within 4.17. Upgrades to future releases are not recommended. This strategy ensures that administrators can explicitly decide to upgrade to the minor version of OpenShift Data Foundation. Upgrade channels control only release selection and do not impact the version of the cluster that you install; the odf-operator decides the version of OpenShift Data Foundation to be installed. By default, it always installs the latest OpenShift Data Foundation release maintaining the compatibility with OpenShift Container Platform.
So, on OpenShift Container Platform 4.17, OpenShift Data Foundation 4.17 will be the latest version which can be installed. OpenShift Data Foundation upgrades are tied to the OpenShift Container Platform upgrade to ensure that compatibility and interoperability are maintained with the OpenShift Container Platform. For OpenShift Data Foundation 4.17, OpenShift Container Platform 4.17 and 4.18 (when generally available) are supported. OpenShift Container Platform 4.18 is supported to maintain forward compatibility of OpenShift Data Foundation with OpenShift Container Platform. Keep the OpenShift Data Foundation version the same as OpenShift Container Platform in order to get the benefit of all the features and enhancements in that release. Important Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform 4.15 to 4.16 and then to 4.17. You cannot update from OpenShift Container Platform 4.15 to 4.17 directly. For more information, see Preparing to perform an EUS-to-EUS update of the Updating clusters guide in OpenShift Container Platform documentation. OpenShift Data Foundation 4.17 offers the following upgrade channels: stable-4.17 stable-4.16 stable-4.17 channel Once a new version is Generally Available, the stable channel corresponding to the minor version gets updated with the new image which can be used to upgrade. You can use the stable-4.17 channel to upgrade from OpenShift Data Foundation 4.16 and upgrades within 4.17. stable-4.16 You can use the stable-4.16 channel to upgrade from OpenShift Data Foundation 4.15 and upgrades within 4.16. Chapter 3. Updating Red Hat OpenShift Data Foundation 4.16 to 4.17 This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. You must upgrade Red Hat Ceph Storage along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. As there is no dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. For more information about RHCS releases, see the knowledgebase solution . Important Upgrading to 4.17 directly from any version older than 4.16 is not supported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of both Overview - Block and File and Object tabs. Green tick indicates that the storage cluster , object service and data resiliency are all healthy.
Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Prerequisite relevant only for OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS) Add another entry in the trust policy for noobaa-core account as follows: Log into AWS web console where the AWS role resides using http://console.aws.amazon.com/ . Enter the IAM management tool and click Roles . Find the name of the role created for AWS STS to support Multicloud Object Gateway (MCG) authentication using the following command in OpenShift CLI: In the tool, search for the role name that you obtained in the previous step and click on the role name. Under the role summary, click Trust relationships . In the Trusted entities tab, click Edit trust policy on the right. Under the "Action": "sts:AssumeRoleWithWebIdentity" field, there are two fields to enable access for two NooBaa service accounts noobaa and noobaa-endpoint . Add another entry for the core pod's new service account name, system:serviceaccount:openshift-storage:noobaa-core . Click Update policy at the bottom right of the page. The update might take about 5 minutes to get in place. Procedure On the OpenShift Web Console, navigate to Operators -> Installed Operators . Select openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel . Select the stable-4.17 update channel and Save it. If the Upgrade status shows requires approval , click on requires approval . On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Navigate to Operators -> Installed Operators . Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Note After upgrading, if your cluster has five or more nodes, racks, or rooms, and when five or more failure domains are present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient.
Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. If verification steps fail, contact Red Hat Support . Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . Chapter 4. Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y This chapter helps you to upgrade between the z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See the knowledgebase solution to know more about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual , then use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators -> Installed Operators . Select openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows requires approval , click on the requires approval link. On the InstallPlan Details page, click Preview Install Plan .
Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. If verification steps fail, contact Red Hat Support . Chapter 5. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual will require manual approval for each upgrade. Procedure Navigate to Operators -> Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click on the OpenShift Data Foundation operator name. Go to the Subscription tab. Click on the pencil icon for changing the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it. Chapter 6. Updating the OpenShift Data Foundation external secret Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation. Note Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.17.x to 4.17.y. Prerequisites Update the OpenShift Container Platform cluster to the latest stable release of 4.17.z, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. On the Overview - Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. Red Hat Ceph Storage must have a Ceph dashboard installed and configured. Procedure Download the ceph-external-cluster-details-exporter.py python script that matches your OpenShift Data Foundation version. Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this.
Chapter 6. Updating the OpenShift Data Foundation external secret Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation. Note Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.17.x to 4.17.y. Prerequisites Update the OpenShift Container Platform cluster to the latest stable release of 4.17.z, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. On the Overview - Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy. Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is listed only if the RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode. Red Hat Ceph Storage must have a Ceph dashboard installed and configured. Procedure Download the ceph-external-cluster-details-exporter.py Python script that matches your OpenShift Data Foundation version. Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. The updated permissions for the user are set as: Run the previously downloaded Python script and save the JSON output that it generates from the external Red Hat Ceph Storage cluster: Note Make sure to use all the flags that you used in the original deployment, including any optional arguments. Ensure that all the parameters, including the optional arguments, except for monitoring-endpoint and monitoring-endpoint-port , are the same as those you used during the original deployment of OpenShift Data Foundation in external mode. --rbd-data-pool-name Is a mandatory parameter used for providing block storage in OpenShift Data Foundation. --rgw-endpoint Is optional. Provide this parameter if object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> . --monitoring-endpoint Is optional. It accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. Additional flags: rgw-pool-prefix (Optional) The prefix of the RGW pools. If not specified, the default prefix is default . rgw-tls-cert-path (Optional) The file path of the RADOS Gateway endpoint TLS certificate. rgw-skip-tls (Optional) This parameter ignores the TLS certification validation when a self-signed certificate is provided (NOT RECOMMENDED). ceph-conf (Optional) The name of the Ceph configuration file. cluster-name (Optional) The Ceph cluster name. output (Optional) The file where the output is required to be stored. cephfs-metadata-pool-name (Optional) The name of the CephFS metadata pool. cephfs-data-pool-name (Optional) The name of the CephFS data pool. cephfs-filesystem-name (Optional) The name of the CephFS filesystem. rbd-metadata-ec-pool-name (Optional) The name of the erasure coded RBD metadata pool. dry-run (Optional) This parameter prints the commands that would be executed without running them. Save the JSON output generated after running the script in the previous step. Example output: Upload the generated JSON file. Log in to the OpenShift Web Console. Click Workloads -> Secrets . Set project to openshift-storage . Click rook-ceph-external-cluster-details . Click Actions (...) -> Edit Secret . Click Browse and upload the JSON file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. On the Overview -> Block and File tab, check the Details card to verify that the RHCS dashboard link is available and also check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is listed only if the RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode. If verification steps fail, contact Red Hat Support .
[
"oc get deployment noobaa-operator -o yaml -n openshift-storage | grep ROLEARN -A1 value: arn:aws:iam::123456789101:role/your-role-name-here",
"oc get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}'| base64 --decode > ceph-external-cluster-details-exporter.py",
"python3 ceph-external-cluster-details-exporter.py --upgrade",
"client.csi-cephfs-node key: AQCYz0piYgu/IRAAipji4C8+Lfymu9vOrox3zQ== caps: [mds] allow rw caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs = client.csi-cephfs-provisioner key: AQCYz0piDUMSIxAARuGUyhLXFO9u4zQeRG65pQ== caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs metadata=* client.csi-rbd-node key: AQCYz0pi88IKHhAAvzRN4fD90nkb082ldrTaHA== caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd client.csi-rbd-provisioner key: AQCYz0pi6W8IIBAAgRJfrAW7kZfucNdqJqS9dQ== caps: [mgr] allow rw caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --monitoring-endpoint <ceph mgr prometheus exporter endpoint> --monitoring-endpoint-port <ceph mgr prometheus exporter port> --rgw-endpoint <rgw endpoint> --run-as-user <ocs_client_name> [optional arguments]",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}}]"
]
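As an alternative to uploading the JSON file through the web console, the secret can also be updated from the command line. The following sketch assumes that the script output was saved to output.json and that the secret stores the payload under the external_cluster_details key, which is what the Rook operator reads in external mode; confirm the key name on your cluster before patching.

# Minimal sketch: replacing the external cluster details secret from the CLI.
# Assumes the script output was saved to output.json; verify the data key name first.
oc get secret rook-ceph-external-cluster-details -n openshift-storage -o jsonpath='{.data}' | cut -c1-200
NEW_DETAILS=$(base64 -w0 < output.json)
oc patch secret rook-ceph-external-cluster-details -n openshift-storage --type merge -p "{\"data\":{\"external_cluster_details\":\"${NEW_DETAILS}\"}}"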
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/updating_openshift_data_foundation/index
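The verification steps in the chapters above rely on the web console status cards. A rough command-line equivalent, useful for scripting the same health checks, is sketched below; the openshift-storage namespace is the default and may differ in your deployment.

# Minimal sketch: checking OpenShift Data Foundation health from the CLI.
oc get storagecluster -n openshift-storage
# Ceph-level health as reported by the Rook operator (HEALTH_OK is the goal)
oc get cephcluster -n openshift-storage -o jsonpath='{.items[0].status.ceph.health}{"\n"}'
# All pods in the namespace should be Running or Completed
oc get pods -n openshift-storage --no-headers | grep -vE 'Running|Completed' || echo "all pods healthy"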
Chapter 7. Probe [monitoring.coreos.com/v1]
Chapter 7. Probe [monitoring.coreos.com/v1] Description The Probe custom resource definition (CRD) defines how to scrape metrics from prober exporters such as the [blackbox exporter]( https://github.com/prometheus/blackbox_exporter ). The Probe resource needs 2 pieces of information: * The list of probed addresses which can be defined statically or by discovering Kubernetes Ingress objects. * The prober which exposes the availability of probed endpoints (over various protocols such HTTP, TCP, ICMP, ... ) as Prometheus metrics. Prometheus and PrometheusAgent objects select Probe objects using label and namespace selectors. Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of desired Ingress selection for target discovery by Prometheus. 7.1.1. .spec Description Specification of desired Ingress selection for target discovery by Prometheus. Type object Property Type Description authorization object Authorization section for this endpoint basicAuth object BasicAuth allow an endpoint to authenticate over basic authentication. More info: https://prometheus.io/docs/operating/configuration/#endpoint bearerTokenSecret object Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the probe and accessible by the Prometheus Operator. interval string Interval at which targets are probed using the configured prober. If not specified Prometheus' global scrape interval is used. jobName string The job name assigned to scraped metrics by default. keepDroppedTargets integer Per-scrape limit on the number of targets dropped by relabeling that will be kept in memory. 0 means no limit. It requires Prometheus >= v2.47.0. labelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. labelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. labelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. metricRelabelings array MetricRelabelConfigs to apply to samples before ingestion. metricRelabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config module string The module to use for probing specifying how to probe the target. 
Example module configuring in the blackbox exporter: https://github.com/prometheus/blackbox_exporter/blob/master/example.yml nativeHistogramBucketLimit integer If there are more than this many buckets in a native histogram, buckets will be merged to stay within the limit. It requires Prometheus >= v2.45.0. nativeHistogramMinBucketFactor integer-or-string If the growth factor of one bucket to the is smaller than this, buckets will be merged to increase the factor sufficiently. It requires Prometheus >= v2.50.0. oauth2 object OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. prober object Specification for the prober to use for probing targets. The prober.URL parameter is required. Targets cannot be probed if left empty. sampleLimit integer SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. scrapeClass string The scrape class to apply. scrapeClassicHistograms boolean Whether to scrape a classic histogram that is also exposed as a native histogram. It requires Prometheus >= v2.45.0. scrapeProtocols array (string) scrapeProtocols defines the protocols to negotiate during a scrape. It tells clients the protocols supported by Prometheus in order of preference (from most to least preferred). If unset, Prometheus uses its default value. It requires Prometheus >= v2.49.0. scrapeTimeout string Timeout for scraping metrics from the Prometheus exporter. If not specified, the Prometheus global scrape timeout is used. targetLimit integer TargetLimit defines a limit on the number of scraped targets that will be accepted. targets object Targets defines a set of static or dynamically discovered targets to probe. tlsConfig object TLS configuration to use when scraping the endpoint. 7.1.2. .spec.authorization Description Authorization section for this endpoint Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 7.1.3. .spec.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.4. .spec.basicAuth Description BasicAuth allow an endpoint to authenticate over basic authentication. More info: https://prometheus.io/docs/operating/configuration/#endpoint Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 7.1.5. .spec.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.6. .spec.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.7. .spec.bearerTokenSecret Description Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the probe and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.8. .spec.metricRelabelings Description MetricRelabelConfigs to apply to samples before ingestion. Type array 7.1.9. .spec.metricRelabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 7.1.10. .spec.oauth2 Description OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. 
noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 7.1.11. .spec.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 7.1.12. .spec.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 7.1.13. .spec.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.14. .spec.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.15. .spec.oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 7.1.16. .spec.oauth2.proxyConnectHeader{} Description Type array 7.1.17. .spec.oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.18. .spec.oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 7.1.19. .spec.oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 7.1.20. .spec.oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 7.1.21. .spec.oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.22. .spec.oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 7.1.23. .spec.oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 7.1.24. 
.spec.oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.25. .spec.oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.26. .spec.prober Description Specification for the prober to use for probing targets. The prober.URL parameter is required. Targets cannot be probed if left empty. Type object Required url Property Type Description path string Path to collect metrics from. Defaults to /probe . proxyUrl string Optional ProxyURL. scheme string HTTP scheme to use for scraping. http and https are the expected values unless you rewrite the scheme label via relabeling. If empty, Prometheus uses the default value http . url string Mandatory URL of the prober. 7.1.27. .spec.targets Description Targets defines a set of static or dynamically discovered targets to probe. Type object Property Type Description ingress object ingress defines the Ingress objects to probe and the relabeling configuration. If staticConfig is also defined, staticConfig takes precedence. staticConfig object staticConfig defines the static list of targets to probe and the relabeling configuration. If ingress is also defined, staticConfig takes precedence. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#static_config . 7.1.28. .spec.targets.ingress Description ingress defines the Ingress objects to probe and the relabeling configuration. If staticConfig is also defined, staticConfig takes precedence. Type object Property Type Description namespaceSelector object From which namespaces to select Ingress objects. relabelingConfigs array RelabelConfigs to apply to the label set of the target before it gets scraped. The original ingress address is available via the __tmp_prometheus_ingress_address label. It can be used to customize the probed URL. The original scrape job's name is available via the \__tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelingConfigs[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config selector object Selector to select the Ingress objects. 7.1.29. .spec.targets.ingress.namespaceSelector Description From which namespaces to select Ingress objects. 
Type object Property Type Description any boolean Boolean describing whether all namespaces are selected in contrast to a list restricting them. matchNames array (string) List of namespace names to select from. 7.1.30. .spec.targets.ingress.relabelingConfigs Description RelabelConfigs to apply to the label set of the target before it gets scraped. The original ingress address is available via the __tmp_prometheus_ingress_address label. It can be used to customize the probed URL. The original scrape job's name is available via the \__tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 7.1.31. .spec.targets.ingress.relabelingConfigs[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 7.1.32. .spec.targets.ingress.selector Description Selector to select the Ingress objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.33. .spec.targets.ingress.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.34. .spec.targets.ingress.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
This array is replaced during a strategic merge patch. 7.1.35. .spec.targets.staticConfig Description staticConfig defines the static list of targets to probe and the relabeling configuration. If ingress is also defined, staticConfig takes precedence. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#static_config . Type object Property Type Description labels object (string) Labels assigned to all metrics scraped from the targets. relabelingConfigs array RelabelConfigs to apply to the label set of the targets before it gets scraped. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelingConfigs[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config static array (string) The list of hosts to probe. 7.1.36. .spec.targets.staticConfig.relabelingConfigs Description RelabelConfigs to apply to the label set of the targets before it gets scraped. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 7.1.37. .spec.targets.staticConfig.relabelingConfigs[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 7.1.38. .spec.tlsConfig Description TLS configuration to use when scraping the endpoint. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 7.1.39. .spec.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 7.1.40. 
.spec.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 7.1.41. .spec.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.42. .spec.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 7.1.43. .spec.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 7.1.44. .spec.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.45. .spec.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.2. 
API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/probes GET : list objects of kind Probe /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes DELETE : delete collection of Probe GET : list objects of kind Probe POST : create a Probe /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes/{name} DELETE : delete a Probe GET : read the specified Probe PATCH : partially update the specified Probe PUT : replace the specified Probe 7.2.1. /apis/monitoring.coreos.com/v1/probes HTTP method GET Description list objects of kind Probe Table 7.1. HTTP responses HTTP code Reponse body 200 - OK ProbeList schema 401 - Unauthorized Empty 7.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes HTTP method DELETE Description delete collection of Probe Table 7.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Probe Table 7.3. HTTP responses HTTP code Reponse body 200 - OK ProbeList schema 401 - Unauthorized Empty HTTP method POST Description create a Probe Table 7.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.5. Body parameters Parameter Type Description body Probe schema Table 7.6. HTTP responses HTTP code Reponse body 200 - OK Probe schema 201 - Created Probe schema 202 - Accepted Probe schema 401 - Unauthorized Empty 7.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/probes/{name} Table 7.7. Global path parameters Parameter Type Description name string name of the Probe HTTP method DELETE Description delete a Probe Table 7.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Probe Table 7.10. HTTP responses HTTP code Reponse body 200 - OK Probe schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Probe Table 7.11. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.12. HTTP responses HTTP code Reponse body 200 - OK Probe schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Probe Table 7.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.14. Body parameters Parameter Type Description body Probe schema Table 7.15. HTTP responses HTTP code Reponse body 200 - OK Probe schema 201 - Created Probe schema 401 - Unauthorized Empty
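The reference tables above describe the fields individually. As a concrete illustration, the following sketch creates a Probe that asks a blackbox exporter to check two static HTTP targets; the namespace, exporter address, module name, and target URLs are placeholders, and the manifest assumes a blackbox exporter is already running and reachable at that address.

cat <<'EOF' | oc apply -f -
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: example-static-probe
  namespace: openshift-monitoring
spec:
  jobName: example-http-checks
  interval: 60s
  module: http_2xx
  prober:
    url: blackbox-exporter.example.svc:9115
  targets:
    staticConfig:
      static:
        - https://example.com
        - https://example.org
      labels:
        team: platform
EOF

The staticConfig block corresponds to section 7.1.35, and prober.url is the mandatory field called out in section 7.1.26.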
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring_apis/probe-monitoring-coreos-com-v1
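The API endpoints listed in section 7.2 can also be exercised directly with the oc client; the commands below are a minimal sketch, and the probe name and namespace are placeholders.

# List Probe objects across all namespaces
oc get probes.monitoring.coreos.com -A
# Read a single Probe (placeholder name and namespace)
oc get probe example-static-probe -n openshift-monitoring -o yaml
# Equivalent request against the raw API path from section 7.2
oc get --raw /apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/probes/example-static-probe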
probe::tty.read
probe::tty.read Name probe::tty.read - called when a tty line will be read Synopsis tty.read Values file_name the file name related to the tty driver_name the driver name nr the number of characters to be read buffer the buffer that will receive the characters
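A short SystemTap one-liner run through the stap command shows how these values can be used together; this is only a sketch and assumes that the kernel debuginfo packages required by SystemTap are installed.

# Minimal sketch: print each tty read with the values exposed by probe::tty.read
stap -e 'probe tty.read { printf("driver=%s file=%s chars_requested=%d\n", driver_name, file_name, nr) }'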
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tty-read
Chapter 19. Managing cloud provider credentials
Chapter 19. Managing cloud provider credentials 19.1. About the Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. 19.1.1. Modes By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in mint , passthrough , or manual mode. These options provide transparency and flexibility in how the CCO uses cloud credentials to process CredentialsRequest CRs in the cluster, and allow the CCO to be configured to suit the security requirements of your organization. Not all CCO modes are supported for all cloud providers. Mint : In mint mode, the CCO uses the provided admin-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. Passthrough : In passthrough mode, the CCO passes the provided cloud credential to the components that request cloud credentials. Manual mode with long-term credentials for components : In manual mode, you can manage long-term cloud credentials instead of the CCO. Manual mode with short-term credentials for components : For some providers, you can use the CCO utility ( ccoctl ) during installation to implement short-term credentials for individual components. These credentials are created and managed outside the OpenShift Container Platform cluster. Table 19.1. CCO mode support matrix Cloud provider Mint Passthrough Manual with long-term credentials Manual with short-term credentials Alibaba Cloud X [1] Amazon Web Services (AWS) X X X X Global Microsoft Azure X X X Microsoft Azure Stack Hub X Google Cloud Platform (GCP) X X X X IBM Cloud(R) X [1] Nutanix X [1] Red Hat OpenStack Platform (RHOSP) X VMware vSphere X This platform uses the ccoctl utility during installation to configure long-term credentials. 19.1.2. Determining the Cloud Credential Operator mode For platforms that support using the CCO in multiple modes, you can determine what mode the CCO is configured to use by using the web console or the CLI. Figure 19.1. Determining the CCO configuration 19.1.2.1. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode . 
The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use the default ( '' ) only: To determine whether the cluster is operating in mint or passthrough mode, inspect the annotations on the cluster root secret: Navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials To view the CCO mode that the cluster is using, click 1 annotation under Annotations , and check the value field. The following values are possible: Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. 19.1.2.2. Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. 
Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use the default ( '' ) only: To determine whether the cluster is operating in mint or passthrough mode, run the following command: USD oc get secret <secret_name> \ -n kube-system \ -o jsonpath \ --template '{ .metadata.annotations }' where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. This command displays the value of the .metadata.annotations parameter in the cluster root secret object. The following output values are possible: Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. 19.1.3. 
Default behavior For platforms on which multiple modes are supported (AWS, Azure, and GCP), when the CCO operates in its default mode, it checks the provided credentials dynamically to determine for which mode they are sufficient to process CredentialsRequest CRs. By default, the CCO determines whether the credentials are sufficient for mint mode, which is the preferred mode of operation, and uses those credentials to create appropriate credentials for components in the cluster. If the credentials are not sufficient for mint mode, it determines whether they are sufficient for passthrough mode. If the credentials are not sufficient for passthrough mode, the CCO cannot adequately process CredentialsRequest CRs. If the provided credentials are determined to be insufficient during installation, the installation fails. For AWS, the installation program fails early in the process and indicates which required permissions are missing. Other providers might not provide specific information about the cause of the error until errors are encountered. If the credentials are changed after a successful installation and the CCO determines that the new credentials are insufficient, the CCO puts conditions on any new CredentialsRequest CRs to indicate that it cannot process them because of the insufficient credentials. To resolve insufficient credentials issues, provide a credential with sufficient permissions. If an error occurred during installation, try installing again. For issues with new CredentialsRequest CRs, wait for the CCO to try to process the CR again. As an alternative, you can configure your cluster to use a different CCO mode that is supported for your cloud provider. 19.1.4. Additional resources Cluster Operators reference page for the Cloud Credential Operator 19.2. The Cloud Credential Operator in mint mode Mint mode is the default Cloud Credential Operator (CCO) credentials mode for OpenShift Container Platform on platforms that support it. Mint mode supports Amazon Web Services (AWS) and Google Cloud Platform (GCP) clusters. 19.2.1. Mint mode credentials management For clusters that use the CCO in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions. With mint mode, each cluster component has only the specific permissions it requires. Cloud credential reconciliation is automatic and continuous so that components can perform actions that require additional credentials or permissions. For example, a minor version cluster update (such as updating from OpenShift Container Platform 4.16 to 4.17) might include an updated CredentialsRequest resource for a cluster component. The CCO, operating in mint mode, uses the admin credential to process the CredentialsRequest resource and create users with limited permissions to satisfy the updated authentication requirements. Note By default, mint mode requires storing the admin credential in the cluster kube-system namespace. If this approach does not meet the security requirements of your organization, you can remove the credential after installing the cluster . 19.2.1.1. Mint mode permissions requirements When using the CCO in mint mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. 
If the provided credentials are not sufficient for mint mode, the CCO cannot create an IAM user. The credential you provide for mint mode in Amazon Web Services (AWS) must have the following permissions: Example 19.1. Required AWS permissions iam:CreateAccessKey iam:CreateUser iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:GetUser iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser iam:SimulatePrincipalPolicy The credential you provide for mint mode in Google Cloud Platform (GCP) must have the following permissions: Example 19.2. Required GCP permissions resourcemanager.projects.get serviceusage.services.list iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.roles.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy 19.2.1.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> 19.2.2. Maintaining cloud provider credentials If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . Delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. 
Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec GCP: GCPProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. 19.2.3. Additional resources Removing cloud provider credentials 19.3. The Cloud Credential Operator in passthrough mode Passthrough mode is supported for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere. In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. The credential must have permissions to perform the installation and complete the operations that are required by components in the cluster, but does not need to be able to create new credentials. The CCO does not attempt to create additional limited-scoped credentials in passthrough mode. Note Manual mode is the only supported CCO configuration for Microsoft Azure Stack Hub. 19.3.1. Passthrough mode permissions requirements When using the CCO in passthrough mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials the CCO passes to a component that creates a CredentialsRequest CR are not sufficient, that component will report an error when it tries to call an API that it does not have permissions for. 19.3.1.1. Amazon Web Services (AWS) permissions The credential you provide for passthrough mode in AWS must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials for AWS . 19.3.1.2. Microsoft Azure permissions The credential you provide for passthrough mode in Azure must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials for Azure . 19.3.1.3. 
Google Cloud Platform (GCP) permissions The credential you provide for passthrough mode in GCP must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating long-term credentials for GCP . 19.3.1.4. Red Hat OpenStack Platform (RHOSP) permissions To install an OpenShift Container Platform cluster on RHOSP, the CCO requires a credential with the permissions of a member user role. 19.3.1.5. VMware vSphere permissions To install an OpenShift Container Platform cluster on VMware vSphere, the CCO requires a credential with the following vSphere privileges: Table 19.2. Required vSphere privileges Category Privileges Datastore Allocate space Folder Create folder , Delete folder vSphere Tagging All privileges Network Assign network Resource Assign virtual machine to resource pool Profile-driven storage All privileges vApp All privileges Virtual machine All privileges 19.3.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Microsoft Azure secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region> On Microsoft Azure, the credentials secret format includes two properties that must contain the cluster's infrastructure ID, generated randomly for each cluster installation. This value can be found after running create manifests: USD cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r Example output mycluster-2mpcn This value would be used in the secret data as follows: azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> Red Hat OpenStack Platform (RHOSP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init> VMware vSphere secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password> 19.3.3. 
Passthrough mode credential maintenance If CredentialsRequest CRs change over time as the cluster is upgraded, you must manually update the passthrough mode credential to meet the requirements. To avoid credentials issues during an upgrade, check the CredentialsRequest CRs in the release image for the new version of OpenShift Container Platform before upgrading. To locate the CredentialsRequest CRs that are required for your cloud provider, see Manually creating long-term credentials for AWS , Azure , or GCP . 19.3.3.1. Maintaining cloud provider credentials If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials VMware vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials. Note If the vSphere CSI Driver Operator is enabled, this step is not required. To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command: USD oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date )"'"}}' \ --type=merge While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true . To view the status, run the following command: USD oc get co kube-controller-manager Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. Additional resources vSphere CSI Driver Operator 19.3.4. Reducing permissions after installation When using passthrough mode, each component has the same permissions used by all other components. If you do not reduce the permissions after installing, all components have the broad permissions that are required to run the installer. 
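To see exactly which permissions each component requests, you can extract the CredentialsRequest manifests from the release image and review them before trimming the credential. This is a sketch: the release image pull spec is illustrative, and the --cloud value must match your platform.
# Extract the AWS CredentialsRequest manifests from a release image (pull spec is illustrative)
oc adm release extract \
  --credentials-requests \
  --cloud=aws \
  --to=./credrequests \
  quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64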
After installation, you can reduce the permissions on your credential to only those that are required to run the cluster, as defined by the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are using. To locate the CredentialsRequest CRs that are required for AWS, Azure, or GCP and learn how to change the permissions the CCO uses, see Manually creating long-term credentials for AWS , Azure , or GCP . 19.3.5. Additional resources Manually creating long-term credentials for AWS Manually creating long-term credentials for Azure Manually creating long-term credentials for GCP 19.4. Manual mode with long-term credentials for components Manual mode is supported for Alibaba Cloud, Amazon Web Services (AWS), global Microsoft Azure, Microsoft Azure Stack Hub, Google Cloud Platform (GCP), IBM Cloud(R), and Nutanix. 19.4.1. User-managed credentials In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). To use this mode, you must examine the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are running or installing, create corresponding credentials in the underlying cloud provider, and create Kubernetes Secrets in the correct namespaces to satisfy all CredentialsRequest CRs for the cluster's cloud provider. Some platforms use the CCO utility ( ccoctl ) to facilitate this process during installation and updates. Using manual mode with long-term credentials allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. This mode also does not require connectivity to services such as the AWS public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. For information about configuring your cloud provider to use manual mode, see the manual credentials management options for your cloud provider. Note An AWS, global Azure, or GCP cluster that uses manual mode might be configured to use short-term credentials for different components. For more information, see Manual mode with short-term credentials for components . 19.4.2. Additional resources Manually creating RAM resources for Alibaba Cloud Manually creating long-term credentials for AWS Manually creating long-term credentials for Azure Manually creating long-term credentials for GCP Configuring IAM for IBM Cloud(R) Configuring IAM for Nutanix Manual mode with short-term credentials for components Preparing to update a cluster with manually maintained credentials 19.5. Manual mode with short-term credentials for components During installation, you can configure the Cloud Credential Operator (CCO) to operate in manual mode and use the CCO utility ( ccoctl ) to implement short-term security credentials for individual components that are created and managed outside the OpenShift Container Platform cluster. Note This credentials strategy is supported for Amazon Web Services (AWS), Google Cloud Platform (GCP), and global Microsoft Azure only. For AWS and GCP clusters, you must configure your cluster to use this strategy during installation of a new OpenShift Container Platform cluster. You cannot configure an existing AWS or GCP cluster that uses a different credentials strategy to use this feature. If you did not configure your Azure cluster to use Microsoft Entra Workload ID during installation, you can enable this authentication method on an existing cluster . 
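For orientation, the overall workflow on AWS combines the release image extraction shown earlier with the ccoctl utility. The following is a sketch, not a complete procedure: the cluster name, region, and directory paths are illustrative, and you should follow the linked configuration procedure for your provider.
# Create the OIDC provider, IAM roles, and component secret manifests from extracted CredentialsRequest manifests
ccoctl aws create-all \
  --name=mycluster \
  --region=us-east-1 \
  --credentials-requests-dir=./credrequests \
  --output-dir=./ccoctl-output
# Copy the generated manifests and TLS material into the installation directory before creating the cluster
cp ./ccoctl-output/manifests/* ./install-dir/manifests/
cp -a ./ccoctl-output/tls ./install-dir/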
Cloud providers use different terms for their implementation of this authentication method. Table 19.3. Short-term credentials provider terminology Cloud provider Provider nomenclature Amazon Web Services (AWS) AWS Security Token Service (STS) Google Cloud Platform (GCP) GCP Workload Identity Global Microsoft Azure Microsoft Entra Workload ID 19.5.1. AWS Security Token Service In manual mode with STS, the individual OpenShift Container Platform cluster components use the AWS Security Token Service (STS) to assign components IAM roles that provide short-term, limited-privilege security credentials. These credentials are associated with IAM roles that are specific to each component that makes AWS API calls. Additional resources Configuring an AWS cluster to use short-term credentials 19.5.1.1. AWS Security Token Service authentication process The AWS Security Token Service (STS) and the AssumeRole API action allow pods to retrieve access keys that are defined by an IAM role policy. The OpenShift Container Platform cluster includes a Kubernetes service account signing service. This service uses a private key to sign service account JSON web tokens (JWT). A pod that requires a service account token requests one through the pod specification. When the pod is created and assigned to a node, the node retrieves a signed service account from the service account signing service and mounts it onto the pod. Clusters that use STS contain an IAM role ID in their Kubernetes configuration secrets. Workloads assume the identity of this IAM role ID. The signed service account token issued to the workload aligns with the configuration in AWS, which allows AWS STS to grant access keys for the specified IAM role to the workload. AWS STS grants access keys only for requests that include service account tokens that meet the following conditions: The token name and namespace match the service account name and namespace. The token is signed by a key that matches the public key. The public key pair for the service account signing key used by the cluster is stored in an AWS S3 bucket. AWS STS federation validates that the service account token signature aligns with the public key stored in the S3 bucket. 19.5.1.1.1. Authentication flow for AWS STS The following diagram illustrates the authentication flow between AWS and the OpenShift Container Platform cluster when using AWS STS. Token signing is the Kubernetes service account signing service on the OpenShift Container Platform cluster. The Kubernetes service account in the pod is the signed service account token. Figure 19.2. AWS Security Token Service authentication flow Requests for new and refreshed credentials are automated by using an appropriately configured AWS IAM OpenID Connect (OIDC) identity provider combined with AWS IAM roles. Service account tokens that are trusted by AWS IAM are signed by OpenShift Container Platform and can be projected into a pod and used for authentication. 19.5.1.1.2. Token refreshing for AWS STS The signed service account token that a pod uses expires after a period of time. For clusters that use AWS STS, this time period is 3600 seconds, or one hour. The kubelet on the node that the pod is assigned to ensures that the token is refreshed. The kubelet attempts to rotate a token when it is older than 80 percent of its time to live. 19.5.1.1.3. OpenID Connect requirements for AWS STS You can store the public portion of the encryption keys for your OIDC configuration in a public or private S3 bucket. 
The OIDC spec requires the use of HTTPS. AWS services require a public endpoint to expose the OIDC documents in the form of JSON web key set (JWKS) public keys. This allows AWS services to validate the bound tokens signed by Kubernetes and determine whether to trust certificates. As a result, both S3 bucket options require a public HTTPS endpoint and private endpoints are not supported. To use AWS STS, the public AWS backbone for the AWS STS service must be able to communicate with a public S3 bucket or a private S3 bucket with a public CloudFront endpoint. You can choose which type of bucket to use when you process CredentialsRequest objects during installation: By default, the CCO utility ( ccoctl ) stores the OIDC configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. As an alternative, you can have the ccoctl utility store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL. 19.5.1.2. AWS component secret formats Using manual mode with the AWS Security Token Service (STS) changes the content of the AWS credentials that are provided to individual OpenShift Container Platform components. Compare the following secret formats: AWS secret format using long-term credentials apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: aws_access_key_id: <base64_encoded_access_key_id> aws_secret_access_key: <base64_encoded_secret_access_key> 1 The namespace for the component. 2 The name of the component secret. AWS secret format using AWS STS apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator_role_name> 3 web_identity_token_file: <path_to_token> 4 1 The namespace for the component. 2 The name of the component secret. 3 The IAM role for the component. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 19.5.1.3. AWS component secret permissions requirements OpenShift Container Platform components require the following permissions. These values are in the CredentialsRequest custom resource (CR) for each component. Note These permissions apply to all resources. Unless specified, there are no request conditions on these permissions. 
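You can also read these values from a running cluster by inspecting a component's CredentialsRequest object. The following sketch uses the Machine API Operator as an example; the jsonpath expression assumes the AWS provider spec.
# List the in-cluster CredentialsRequest objects
oc -n openshift-cloud-credential-operator get credentialsrequests
# Print the AWS actions that the Machine API Operator requests
oc -n openshift-cloud-credential-operator get credentialsrequest openshift-machine-api-aws \
  -o jsonpath='{.spec.providerSpec.statementEntries[*].action}'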
Component Custom resource Required permissions for services Cluster CAPI Operator openshift-cluster-api-aws EC2 ec2:CreateTags ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstances ec2:DescribeInternetGateways ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeVpcs ec2:DescribeNetworkInterfaces ec2:DescribeNetworkInterfaceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RunInstances ec2:TerminateInstances Elastic load balancing elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTargetGroups elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:DeregisterTargets Identity and Access Management (IAM) iam:PassRole iam:CreateServiceLinkedRole Key Management Service (KMS) kms:Decrypt kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:DescribeKey kms:RevokeGrant [1] kms:CreateGrant [1] kms:ListGrants [1] Machine API Operator openshift-machine-api-aws EC2 ec2:CreateTags ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeSecurityGroups ec2:DescribeRegions ec2:DescribeSubnets ec2:DescribeVpcs ec2:RunInstances ec2:TerminateInstances Elastic load balancing elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTargetGroups elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:DeregisterTargets Identity and Access Management (IAM) iam:PassRole iam:CreateServiceLinkedRole Key Management Service (KMS) kms:Decrypt kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:DescribeKey kms:RevokeGrant [1] kms:CreateGrant [1] kms:ListGrants [1] Cloud Credential Operator cloud-credential-operator-iam-ro Identity and Access Management (IAM) iam:GetUser iam:GetUserPolicy iam:ListAccessKeys Cluster Image Registry Operator openshift-image-registry S3 s3:CreateBucket s3:DeleteBucket s3:PutBucketTagging s3:GetBucketTagging s3:PutBucketPublicAccessBlock s3:GetBucketPublicAccessBlock s3:PutEncryptionConfiguration s3:GetEncryptionConfiguration s3:PutLifecycleConfiguration s3:GetLifecycleConfiguration s3:GetBucketLocation s3:ListBucket s3:GetObject s3:PutObject s3:DeleteObject s3:ListBucketMultipartUploads s3:AbortMultipartUpload s3:ListMultipartUploadParts Ingress Operator openshift-ingress Elastic load balancing elasticloadbalancing:DescribeLoadBalancers Route 53 route53:ListHostedZones route53:ListTagsForResources route53:ChangeResourceRecordSets Tag tag:GetResources Security Token Service (STS) sts:AssumeRole Cluster Network Operator openshift-cloud-network-config-controller-aws EC2 ec2:DescribeInstances ec2:DescribeInstanceStatus ec2:DescribeInstanceTypes ec2:UnassignPrivateIpAddresses ec2:AssignPrivateIpAddresses ec2:UnassignIpv6Addresses ec2:AssignIpv6Addresses ec2:DescribeSubnets ec2:DescribeNetworkInterfaces AWS Elastic Block Store CSI Driver Operator aws-ebs-csi-driver-operator EC2 ec2:AttachVolume ec2:CreateSnapshot ec2:CreateTags ec2:CreateVolume ec2:DeleteSnapshot ec2:DeleteTags ec2:DeleteVolume ec2:DescribeInstances ec2:DescribeSnapshots ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVolumesModifications ec2:DetachVolume ec2:ModifyVolume ec2:DescribeAvailabilityZones ec2:EnableFastSnapshotRestores Key Management Service (KMS) kms:ReEncrypt* kms:Decrypt kms:Encrypt 
kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:DescribeKey kms:RevokeGrant [1] kms:CreateGrant [1] kms:ListGrants [1] Request condition: kms:GrantIsForAWSResource: true 19.5.1.4. OLM-managed Operator support for authentication with AWS STS In addition to OpenShift Container Platform cluster components, some Operators managed by the Operator Lifecycle Manager (OLM) on AWS clusters can use manual mode with STS. These Operators authenticate with limited-privilege, short-term credentials that are managed outside the cluster. To determine if an Operator supports authentication with AWS STS, see the Operator description in OperatorHub. Additional resources CCO-based workflow for OLM-managed Operators with AWS STS 19.5.2. GCP Workload Identity In manual mode with GCP Workload Identity, the individual OpenShift Container Platform cluster components use the GCP workload identity provider to allow components to impersonate GCP service accounts using short-term, limited-privilege credentials. Additional resources Configuring a GCP cluster to use short-term credentials 19.5.2.1. GCP Workload Identity authentication process Requests for new and refreshed credentials are automated by using an appropriately configured OpenID Connect (OIDC) identity provider combined with IAM service accounts. Service account tokens that are trusted by GCP are signed by OpenShift Container Platform and can be projected into a pod and used for authentication. Tokens are refreshed after one hour. The following diagram details the authentication flow between GCP and the OpenShift Container Platform cluster when using GCP Workload Identity. Figure 19.3. GCP Workload Identity authentication flow 19.5.2.2. GCP component secret formats Using manual mode with GCP Workload Identity changes the content of the GCP credentials that are provided to individual OpenShift Container Platform components. Compare the following secret content: GCP secret format apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3 1 The namespace for the component. 2 The name of the component secret. 3 The Base64 encoded service account. Content of the Base64 encoded service_account.json file using long-term credentials { "type": "service_account", 1 "project_id": "<project_id>", "private_key_id": "<private_key_id>", "private_key": "<private_key>", 2 "client_email": "<client_email_address>", "client_id": "<client_id>", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>" } 1 The credential type is service_account . 2 The private RSA key that is used to authenticate to GCP. This key must be kept secure and is not rotated. 
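To check which of the two formats a particular component secret uses before comparing it with the external_account variant that follows, you can decode the secret on the cluster. This is a sketch that reuses the placeholder names from the secret format above and assumes the jq utility is available.
# Print the credential type: "service_account" for long-term keys, "external_account" for Workload Identity
oc get secret <target_secret_name> -n <target_namespace> \
  -o jsonpath='{.data.service_account\.json}' | base64 -d | jq -r .type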
Content of the Base64 encoded service_account.json file using GCP Workload Identity { "type": "external_account", 1 "audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider", 2 "subject_token_type": "urn:ietf:params:oauth:token-type:jwt", "token_url": "https://sts.googleapis.com/v1/token", "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken", 3 "credential_source": { "file": "<path_to_token>", 4 "format": { "type": "text" } } } 1 The credential type is external_account . 2 The target audience is the GCP Workload Identity provider. 3 The resource URL of the service account that can be impersonated with these credentials. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 19.5.3. Microsoft Entra Workload ID In manual mode with Microsoft Entra Workload ID, the individual OpenShift Container Platform cluster components use the Workload ID provider to assign components short-term security credentials. Additional resources Configuring a global Microsoft Azure cluster to use short-term credentials 19.5.3.1. Microsoft Entra Workload ID authentication process The following diagram details the authentication flow between Azure and the OpenShift Container Platform cluster when using Microsoft Entra Workload ID. Figure 19.4. Workload ID authentication flow 19.5.3.2. Azure component secret formats Using manual mode with Microsoft Entra Workload ID changes the content of the Azure credentials that are provided to individual OpenShift Container Platform components. Compare the following secret formats: Azure secret format using long-term credentials apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_client_secret: <client_secret> 4 azure_region: <region> azure_resource_prefix: <resource_group_prefix> 5 azure_resourcegroup: <resource_group_prefix>-rg 6 azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque 1 The namespace for the component. 2 The name of the component secret. 3 The client ID of the Microsoft Entra ID identity that the component uses to authenticate. 4 The component secret that is used to authenticate with Microsoft Entra ID for the <client_id> identity. 5 The resource group prefix. 6 The resource group. This value is formed by the <resource_group_prefix> and the suffix -rg . Azure secret format using Microsoft Entra Workload ID apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_federated_token_file: <path_to_token_file> 4 azure_region: <region> azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque 1 The namespace for the component. 2 The name of the component secret. 3 The client ID of the user-assigned managed identity that the component uses to authenticate. 4 The path to the mounted service account token file. 19.5.3.3. Azure component secret permissions requirements OpenShift Container Platform components require the following permissions. These values are in the CredentialsRequest custom resource (CR) for each component. 
Component Custom resource Required permissions for services Cloud Controller Manager Operator openshift-azure-cloud-controller-manager Microsoft.Compute/virtualMachines/read Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/read Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/write Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Cluster CAPI Operator openshift-cluster-api-azure role: Contributor [1] Machine API Operator openshift-machine-api-azure Microsoft.Compute/availabilitySets/delete Microsoft.Compute/availabilitySets/read Microsoft.Compute/availabilitySets/write Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/disks/delete Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/skus/read Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/extensions/delete Microsoft.Compute/virtualMachines/extensions/read Microsoft.Compute/virtualMachines/extensions/write Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.Network/applicationSecurityGroups/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/loadBalancers/read Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/routeTables/read Microsoft.Network/virtualNetworks/delete Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Resources/subscriptions/resourceGroups/read Cluster Image Registry Operator openshift-image-registry-azure Data permissions Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action General permissions Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Resources/tags/write Ingress Operator openshift-ingress-azure Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/A/write Cluster Network Operator openshift-cloud-network-config-controller-azure Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Compute/virtualMachines/read Microsoft.Network/virtualNetworks/read 
Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/loadBalancers/backendAddressPools/join/action Azure File CSI Driver Operator azure-file-csi-driver-operator Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Azure Disk CSI Driver Operator azure-disk-csi-driver-operator Microsoft.Compute/disks/* Microsoft.Compute/snapshots/* Microsoft.Compute/virtualMachineScaleSets/*/read Microsoft.Compute/virtualMachineScaleSets/read Microsoft.Compute/virtualMachineScaleSets/virtualMachines/write Microsoft.Compute/virtualMachines/*/read Microsoft.Compute/virtualMachines/write Microsoft.Resources/subscriptions/resourceGroups/read This component requires a role rather than a set of permissions. 19.5.4. Additional resources Configuring an AWS cluster to use short-term credentials Configuring a GCP cluster to use short-term credentials Configuring a global Microsoft Azure cluster to use short-term credentials Preparing to update a cluster with manually maintained credentials
|
[
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"oc get secret <secret_name> -n kube-system -o jsonpath --template '{ .metadata.annotations }'",
"oc get secret <secret_name> -n=kube-system",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>",
"oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'",
"{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }",
"oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2",
"oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region>",
"cat .openshift_install_state.json | jq '.\"*installconfig.ClusterID\".InfraID' -r",
"mycluster-2mpcn",
"azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password>",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge",
"oc get co kube-controller-manager",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: aws_access_key_id: <base64_encoded_access_key_id> aws_secret_access_key: <base64_encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator_role_name> 3 web_identity_token_file: <path_to_token> 4",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3",
"{ \"type\": \"service_account\", 1 \"project_id\": \"<project_id>\", \"private_key_id\": \"<private_key_id>\", \"private_key\": \"<private_key>\", 2 \"client_email\": \"<client_email_address>\", \"client_id\": \"<client_id>\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://oauth2.googleapis.com/token\", \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>\" }",
"{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", 2 \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken\", 3 \"credential_source\": { \"file\": \"<path_to_token>\", 4 \"format\": { \"type\": \"text\" } } }",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_client_secret: <client_secret> 4 azure_region: <region> azure_resource_prefix: <resource_group_prefix> 5 azure_resourcegroup: <resource_group_prefix>-rg 6 azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_federated_token_file: <path_to_token_file> 4 azure_region: <region> azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authentication_and_authorization/managing-cloud-provider-credentials
|
Chapter 4. Configuring Capsule Server with external services
|
Chapter 4. Configuring Capsule Server with external services If you do not want to configure the DNS, DHCP, and TFTP services on Capsule Server, use this section to configure your Capsule Server to work with external DNS, DHCP, and TFTP services. 4.1. Configuring Capsule Server with external DNS You can configure Capsule Server with external DNS. Capsule Server uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the satellite-installer command with the options appropriate for your environment. Prerequisites You must have a configured external DNS server. This guide assumes you have an existing installation. Procedure Copy the /etc/rndc.key file from the external DNS server to Capsule Server: Configure the ownership, permissions, and SELinux context: To test the nsupdate utility, add a host remotely: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule Server and select Refresh from the list in the Actions column. Associate the DNS service with the appropriate subnets and domain. 4.2. Configuring Capsule Server with external DHCP To configure Capsule Server with external DHCP, you must complete the following procedures: Section 4.2.1, "Configuring an external DHCP server to use with Capsule Server" Section 4.2.2, "Configuring Satellite Server with an external DHCP server" 4.2.1. Configuring an external DHCP server to use with Capsule Server To configure an external DHCP server running Red Hat Enterprise Linux to use with Capsule Server, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages. You must also share the DHCP configuration and lease files with Capsule Server. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files. Note If you use dnsmasq as an external DHCP server, enable the dhcp-no-override setting. This is required because Satellite creates configuration files on the TFTP server under the grub2/ subdirectory. If the dhcp-no-override setting is disabled, hosts fetch the bootloader and its configuration from the root directory, which might cause an error. Procedure On your Red Hat Enterprise Linux host, install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages: Generate a security token: Edit the dhcpd configuration file for all subnets and add the key generated by tsig-keygen . The following is an example: Note that the option routers value is the IP address of your Satellite Server or Capsule Server that you want to use with an external DHCP service. On Satellite Server, define each subnet. Do not set DHCP Capsule for the defined Subnet yet. To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the Satellite web UI define the reservation range as 192.168.38.101 to 192.168.38.250. 
Configure the firewall for external access to the DHCP server: Make the changes persistent: On Satellite Server, determine the UID and GID of the foreman user: On the DHCP server, create the foreman user and group with the same IDs as determined in the previous step: To ensure that the configuration files are accessible, restore the read and execute flags: Enable and start the DHCP service: Export the DHCP configuration and lease files using NFS: Create directories for the DHCP configuration and lease files that you want to export using NFS: To create mount points for the created directories, add the following line to the /etc/fstab file: Mount the file systems in /etc/fstab : Ensure the following lines are present in /etc/exports : Note that the IP address that you enter is the Satellite or Capsule IP address that you want to use with an external DHCP service. Reload the NFS server: Configure the firewall for DHCP omapi port 7911: Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3. Make the changes persistent: 4.2.2. Configuring Satellite Server with an external DHCP server You can configure Capsule Server with an external DHCP server. Prerequisites Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with Capsule Server. For more information, see Section 4.2.1, "Configuring an external DHCP server to use with Capsule Server" . Procedure Install the nfs-utils package: Create the DHCP directories for NFS: Change the file owner: Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths: Add the following lines to the /etc/fstab file: Mount the file systems in /etc/fstab : To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file: Associate the DHCP service with the appropriate subnets and domain. 4.3. Configuring Capsule Server with external TFTP You can configure Capsule Server with external TFTP services. Procedure Create the TFTP directory for NFS: In the /etc/fstab file, add the following line: Mount the file systems in /etc/fstab : Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file: If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule Server and select Refresh from the list in the Actions column. Associate the TFTP service with the appropriate subnets and domain. 4.4. Configuring Capsule Server with external IdM DNS When Satellite Server adds a DNS record for a host, it first determines which Capsule is providing DNS for that domain. It then communicates with the Capsule that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the Satellite or Capsule that is currently configured to provide a DNS service for the domain you want to manage using the IdM server. Capsule Server can be configured to use a Red Hat Identity Management (IdM) server to provide DNS service.
For more information about Red Hat Identity Management, see the Linux Domain Identity, Authentication, and Policy Guide . To configure Capsule Server to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures: Section 4.4.1, "Configuring dynamic DNS update with GSS-TSIG authentication" Section 4.4.2, "Configuring dynamic DNS update with TSIG authentication" To revert to internal DNS service, use the following procedure: Section 4.4.3, "Reverting to internal DNS service" Note You are not required to use Capsule Server to manage DNS. When you are using the realm enrollment feature of Satellite, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring Capsule Server with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see External Authentication for Provisioned Hosts in Installing Satellite Server in a connected network environment . 4.4.1. Configuring dynamic DNS update with GSS-TSIG authentication You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645 . To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the Capsule Server base operating system. Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements for IdM in the Installing Identity Management Guide . You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server. You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps: Creating a Kerberos principal on the IdM server Obtain a Kerberos ticket for the account obtained from the IdM administrator: Create a new Kerberos principal for Capsule Server to use to authenticate on the IdM server: Installing and configuring the IdM client On the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment, install the ipa-client package: Configure the IdM client by running the installation script and following the on-screen prompts: Obtain a Kerberos ticket: Remove any preexisting keytab : Obtain the keytab for this system: Note When adding a keytab to a standby system with the same host name as the original system in service, add the -r option to prevent generating new credentials and rendering the credentials on the original system invalid. For the dns.keytab file, set the group and owner to foreman-proxy : Optional: To verify that the keytab file is valid, enter the following command: Configuring DNS zones in the IdM web UI Create and configure the zone that you want to manage: Navigate to Network Services > DNS > DNS Zones . Select Add and enter the zone name. For example, example.com . Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Enable Allow PTR sync . Click Save to save the changes. Create and configure the reverse zone: Navigate to Network Services > DNS > DNS Zones .
Click Add . Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups. Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Click Save to save the changes. Configuring the Satellite or Capsule Server that manages the DNS service for the domain Configure your Satellite Server or Capsule Server to connect to your DNS service: For each affected Capsule, update the configuration of that Capsule in the Satellite web UI: In the Satellite web UI, navigate to Infrastructure > Capsules , locate the Capsule Server, and from the list in the Actions column, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and select the domain name. In the Domain tab, ensure DNS Capsule is set to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to None . In the Domains tab, select the domain that you want to manage using the IdM server. In the Capsules tab, ensure Reverse DNS Capsule is set to the Capsule where the subnet is connected. Click Submit to save the changes. 4.4.2. Configuring dynamic DNS update with TSIG authentication You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC2845 . Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . You must obtain root user access on the IdM server. You must confirm whether Satellite Server or Capsule Server is configured to provide DNS service for your deployment. You must configure DNS, DHCP and TFTP services on the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment. You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with TSIG authentication, complete the following steps: Enabling external updates to the DNS zone in the IdM server On the IdM Server, add the following to the top of the /etc/named.conf file: Reload the named service to make the changes take effect: In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes: Add the following in the BIND update policy box: Set Dynamic update to True . Click Update to save the changes. Copy the /etc/rndc.key file from the IdM server to the base operating system of your Satellite Server. Enter the following command: To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command: Assign the foreman-proxy user to the named group manually. Normally, satellite-installer ensures that the foreman-proxy user belongs to the named UNIX group, however, in this scenario Satellite does not manage users and groups, therefore you need to assign the foreman-proxy user to the named group manually. 
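For example, a minimal way to add the user to the group and confirm the membership, assuming the standard named group exists on the system, is:
# Add foreman-proxy to the named group and verify the result
usermod -a -G named foreman-proxy
id foreman-proxy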
On Satellite Server, enter the following satellite-installer command to configure Satellite to use the external DNS server: Testing external updates to the DNS zone in the IdM server Ensure that the key in the /etc/rndc.key file on Satellite Server is the same key file that is used on the IdM server: On Satellite Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1 . On Satellite Server, test the DNS entry: To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones . Click the name of the zone and search for the host by name. If the name resolves successfully, remove the test DNS entry: Confirm that the DNS entry was removed: The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted. 4.4.3. Reverting to internal DNS service You can revert to using Satellite Server and Capsule Server as your DNS providers. You can use a backup of the answer file that was created before configuring external DNS, or you can create a backup of the answer file. For more information about answer files, see Configuring Satellite Server . Procedure On the Satellite or Capsule Server that you want to configure to manage DNS service for the domain, complete the following steps: Configuring Satellite or Capsule as a DNS server If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the satellite-installer command: If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure Satellite or Capsule as a DNS server without using an answer file, enter the following satellite-installer command on Satellite or Capsule: For more information, see Configuring DNS, DHCP, and TFTP on Capsule Server . After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules . For each Capsule that you want to update, from the Actions list, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and click the domain name that you want to configure. In the Domain tab, set DNS Capsule to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to DHCP or Internal DB . In the Domains tab, select the domain that you want to manage using Satellite or Capsule. In the Capsules tab, set Reverse DNS Capsule to the Capsule where the subnet is connected. Click Submit to save the changes.
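For reference, the revert typically reduces to a single installer run. The following is a minimal sketch that matches the satellite-installer invocation listed with the other commands for this chapter; run it on the Satellite or Capsule Server that should manage DNS again:
satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server="127.0.0.1"
The --foreman-proxy-dns-managed=true option returns management of the named service to the installer, and --foreman-proxy-dns-server="127.0.0.1" points the DNS Capsule feature at the local server instead of the external IdM server.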
|
[
"scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key",
"restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key",
"echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key",
"dnf install dhcp-server bind-utils",
"tsig-keygen -a hmac-md5 omapi_key",
"cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;",
"firewall-cmd --add-service dhcp",
"firewall-cmd --runtime-to-permanent",
"id -u foreman 993 id -g foreman 990",
"groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman",
"chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf",
"systemctl enable --now dhcpd",
"dnf install nfs-utils systemctl enable --now nfs-server",
"mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp",
"/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0",
"mount -a",
"/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)",
"exportfs -rva",
"firewall-cmd --add-port=7911/tcp",
"firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public",
"firewall-cmd --runtime-to-permanent",
"satellite-maintain packages install nfs-utils",
"mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd",
"chown -R foreman-proxy /mnt/nfs",
"showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN",
"DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0",
"mount -a",
"su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit",
"satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911",
"mkdir -p /mnt/nfs/var/lib/tftpboot",
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0",
"mount -a",
"satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true",
"satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN",
"kinit idm_user",
"ipa service-add capsule.example.com",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"kinit admin",
"rm /etc/foreman-proxy/dns.keytab",
"ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab",
"chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab",
"kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]",
"grant capsule\\047 [email protected] wildcard * ANY;",
"grant capsule\\047 [email protected] wildcard * ANY;",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true",
"######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################",
"systemctl reload named",
"grant \"rndc-key\" zonesub ANY;",
"scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key",
"restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key",
"usermod -a -G named foreman-proxy",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key",
"key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };",
"echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1 Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20",
"echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"satellite-installer",
"satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_capsule_server/configuring-external-services
|
Chapter 7. Security
|
Chapter 7. Security AMQ JMS has a range of security-related configuration options that can be leveraged according to your application's needs. Basic user credentials such as username and password should be passed directly to the ConnectionFactory when creating the Connection within the application. However, if you are using the no-argument factory method, it is also possible to supply user credentials in the connection URI. For more information, see the Section 5.1, "JMS options" section. Another common security consideration is the use of SSL/TLS. The client connects to servers over an SSL/TLS transport when the amqps URI scheme is specified in the connection URI , with various options available to configure behavior. For more information, see the Section 5.3, "SSL/TLS options" section. In concert with the earlier items, it may be desirable to restrict the client to allow use of only particular SASL mechanisms from those that may be offered by a server, rather than selecting from all it supports. For more information, see the Section 5.4, "AMQP options" section. Applications calling getObject() on a received ObjectMessage may wish to restrict the types created during deserialization. Note that message bodies composed using the AMQP type system do not use the ObjectInputStream mechanism and therefore do not require this precaution. For more information, see the section called "Deserialization policy options" . 7.1. Enabling OpenSSL support SSL/TLS connections can be configured to use a native OpenSSL implementation for improved performance. To use OpenSSL, the transport.useOpenSSL option must be enabled, and an OpenSSL support library must be available on the classpath. To use the system-installed OpenSSL libraries on Red Hat Enterprise Linux, install the openssl and apr RPM packages and add the following dependency to your POM file: Example: Adding native OpenSSL support <dependency> <groupId>io.netty</groupId> <artifactId>netty-tcnative</artifactId> <version>2.0.39.Final-redhat-00001</version> <classifier>linux-x86_64-fedora</classifier> </dependency> A list of OpenSSL library implementations is available from the Netty project. 7.2. Authenticating using Kerberos The client can be configured to authenticate using Kerberos when used with an appropriately configured server. To enable Kerberos, use the following steps. Configure the client to use the GSSAPI mechanism for SASL authentication using the amqp.saslMechanisms URI option. Set the java.security.auth.login.config system property to the path of a JAAS login configuration file containing appropriate configuration for a Kerberos LoginModule . The login configuration file might look like the following example: The precise configuration used will depend on how you wish the credentials to be established for the connection, and the particular LoginModule in use. For details of the Oracle Krb5LoginModule , see the Oracle Krb5LoginModule class reference . For details of the IBM Java 8 Krb5LoginModule , see the IBM Krb5LoginModule class reference . It is possible to configure a LoginModule to establish the credentials to use for the Kerberos process, such as specifying a principal and whether to use an existing ticket cache or keytab.
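For example, a login configuration entry that takes its credentials from a keytab rather than a ticket cache might look like the following sketch; the keytab path and principal shown are illustrative placeholders, not values provided by the client:
amqp-jms-client { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/path/to/client.keytab" principal="amqclient@EXAMPLE.COM"; };
The entry name must match the login configuration scope that the client looks up, which defaults to amqp-jms-client as described below.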
If, however, the LoginModule configuration does not provide the means to establish all necessary credentials, it may then request and be passed the username and password values from the client Connection object if they were either supplied when creating the Connection using the ConnectionFactory or previously configured via its URI options. Note that Kerberos is supported only for authentication purposes. Use SSL/TLS connections for encryption. The following connection URI options can be used to influence the Kerberos authentication process. sasl.options.configScope The name of the login configuration entry used to authenticate. The default is amqp-jms-client . sasl.options.protocol The protocol value used during the GSSAPI SASL process. The default is amqp . sasl.options.serverName The serverName value used during the GSSAPI SASL process. The default is the server hostname from the connection URI. Similar to the amqp. and transport. options detailed previously, these options must be specified on a per-host basis or as all-host nested options in a failover URI.
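As an illustration, a connection URI that restricts the client to the GSSAPI mechanism and overrides these options for a single host might look like the following sketch; the host names and option values are placeholders:
amqp://myhost:5672?amqp.saslMechanisms=GSSAPI&sasl.options.configScope=amqp-jms-client&sasl.options.serverName=myhost.example.com
In a failover URI, the same settings can be applied to every host by using the failover.nested. prefix, for example failover:(amqp://host1:5672,amqp://host2:5672)?failover.nested.amqp.saslMechanisms=GSSAPI&failover.nested.sasl.options.protocol=amqp .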
|
[
"<dependency> <groupId>io.netty</groupId> <artifactId>netty-tcnative</artifactId> <version>2.0.39.Final-redhat-00001</version> <classifier>linux-x86_64-fedora</classifier> </dependency>",
"amqp://myhost:5672?amqp.saslMechanisms=GSSAPI failover:(amqp://myhost:5672?amqp.saslMechanisms=GSSAPI)",
"-Djava.security.auth.login.config=<login-config-file>",
"amqp-jms-client { com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true; };"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_client/security
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/introduction_to_the_migration_toolkit_for_runtimes/making-open-source-more-inclusive
|
Tuning performance of Red Hat Satellite
|
Tuning performance of Red Hat Satellite Red Hat Satellite 6.15 Optimize performance for Satellite Server and Capsule Red Hat Satellite Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/tuning_performance_of_red_hat_satellite/index
|
Logging
|
Logging OpenShift Container Platform 4.12 Configuring and using logging in OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"tls.verify_certificate = false tls.verify_hostname = false",
"ERROR vector::cli: Configuration error. error=redefinition of table transforms.audit for key transforms.audit",
"oc get clusterversion/version -o jsonpath='{.spec.clusterID}{\"\\n\"}'",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"oc project openshift-logging",
"oc get clusterlogging instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: collector-2rhqp: ip-10-0-169-13.ec2.internal collector-6fgjh: ip-10-0-165-244.ec2.internal collector-6l2ff: ip-10-0-128-218.ec2.internal collector-54nx5: ip-10-0-139-30.ec2.internal collector-flpnn: ip-10-0-147-228.ec2.internal collector-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - collector-2rhqp - collector-54nx5 - collector-6fgjh - collector-6l2ff - collector-flpnn - collector-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}",
"Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:",
"Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable",
"Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:",
"oc project openshift-logging",
"oc describe deployment cluster-logging-operator",
"Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----",
"oc get replicaset",
"NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m",
"oc describe replicaset cluster-logging-operator-574b8987df",
"Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----",
"oc delete pod --selector logging-infra=collector",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/nodes?v",
"oc -n openshift-logging get pods -l component=elasticsearch",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/master?v",
"oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging",
"oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/recovery?active_only=true",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health | grep number_of_pending_tasks",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_search/scroll/_all -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_red_index_name> -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_nodes/stats?pretty",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc -n openshift-logging get po -o wide",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep unassigned_shards",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc -n openshift-logging get po -o wide",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep relocating_shards",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc project openshift-logging",
"oc get Elasticsearch",
"NAME AGE elasticsearch 5h9m",
"oc get Elasticsearch <Elasticsearch-instance> -o yaml",
"oc get Elasticsearch elasticsearch -n openshift-logging -o yaml",
"status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable",
"status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable",
"status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy",
"status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters",
"status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices",
"Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw",
". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . Events: <none>",
"oc get deployment --selector component=elasticsearch -o name",
"deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3",
"oc describe deployment elasticsearch-cdm-1gon-1",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>",
"oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d",
"oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>",
"eo_elasticsearch_cr_cluster_management_state{state=\"managed\"} 1 eo_elasticsearch_cr_cluster_management_state{state=\"unmanaged\"} 0",
"eo_elasticsearch_cr_restart_total{reason=\"cert_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"rolling_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"scheduled_restart\"} 3",
"Total number of Namespaces. es_index_namespaces_total 5",
"es_index_document_count{namespace=\"namespace_1\"} 25 es_index_document_count{namespace=\"namespace_2\"} 10 es_index_document_count{namespace=\"namespace_3\"} 5",
"message\": \"Secret \\\"elasticsearch\\\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]\", \"reason\": \"Missing Required Secrets\",",
"apiVersion: v1 kind: Namespace metadata: name: <name> 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\"",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: [ ] 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"oc get csv -n <namespace>",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE openshift-logging clusterlogging.5.8.0-202007012112.p0 OpenShift Logging 5.8.0-202007012112.p0 Succeeded",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {}",
"oc get deployment",
"NAME READY UP-TO-DATE AVAILABLE AGE cluster-logging-operator 1/1 1 1 18h elasticsearch-cd-x6kdekli-1 1/1 1 1 6m54s elasticsearch-cdm-x6kdekli-1 1/1 1 1 18h elasticsearch-cdm-x6kdekli-2 1/1 1 1 6m49s elasticsearch-cdm-x6kdekli-3 1/1 1 1 6m44s",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: managementState: Managed 3",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: <log_collector_type> 1 resources: {} tolerations: {}",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: visualization: type: <visualizer_type> 1 kibana: 2 resources: {} nodeSelector: {} proxy: {} replicas: {} tolerations: {} ocpConsole: 3 logsLimit: {} timeout: {}",
"oc apply -f <filename>.yaml",
"oc adm pod-network join-projects --to=openshift-operators-redhat openshift-logging",
"oc label namespace openshift-operators-redhat project=openshift-operators-redhat",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ingress-operators-redhat spec: ingress: - from: - podSelector: {} - from: - namespaceSelector: matchLabels: project: \"openshift-operators-redhat\" - from: - namespaceSelector: matchLabels: name: \"openshift-monitoring\" - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"oc -n openshift-logging delete subscription <subscription>",
"oc -n openshift-logging delete operatorgroup <operator_group_name>",
"oc delete clusterserviceversion cluster-logging.<version>",
"oc get operatorgroup <operator_group_name> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - \"\"",
"oc get pod -n openshift-logging --selector component=elasticsearch",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m",
"oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }",
"oc project openshift-logging",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s",
"oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices",
"Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0",
"oc get kibana kibana -o json",
"[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: visualization: type: <visualizer_type> 1 kibana: 2 resources: {} nodeSelector: {} proxy: {} replicas: {} tolerations: {} ocpConsole: 3 logsLimit: {} timeout: {}",
"oc apply -f <filename>.yaml",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"oc get consoles.operator.openshift.io cluster -o yaml |grep logging-view-plugin || oc patch consoles.operator.openshift.io cluster --type=merge --patch '{ \"spec\": { \"plugins\": [\"logging-view-plugin\"]}}'",
"oc patch clusterlogging instance --type=merge --patch '{ \"metadata\": { \"annotations\": { \"logging.openshift.io/ocp-console-migration-target\": \"lokistack-dev\" }}}' -n openshift-logging",
"clusterlogging.logging.openshift.io/instance patched",
"oc get clusterlogging instance -o=jsonpath='{.metadata.annotations.logging\\.openshift\\.io/ocp-console-migration-target}' -n openshift-logging",
"\"lokistack-dev\"",
"oc auth can-i get pods --subresource log -n <project>",
"yes",
"oc auth can-i get pods --subresource log -n <project>",
"yes",
"{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"oc -n openshift-logging edit ClusterLogging instance",
"oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging . spec: visualization: type: \"kibana\" kibana: replicas: 1 1",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"variant: openshift version: 4.12.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: logs: type: vector vector: {}",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}",
"pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json",
"{\"structured\": { \"level\": \"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: - application outputRefs: - default parse: json 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }",
"{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json",
"oc create -f <filename>.yaml",
"oc delete pod --selector logging-infra=collector",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat enableStructuredContainerLogs: true 2 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json",
"apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-secure 4 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 5 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 6 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 8 inputRefs: - audit outputRefs: - elasticsearch-secure - default labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"",
"oc create secret generic -n <namespace> <secret_name> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 pipelines: - inputRefs: - <log_type> 4 outputRefs: - <output_name> 5 outputs: - name: <output_name> 6 type: <output_type> 7 url: <log_output_url> 8",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-app-logs inputRefs: - application outputRefs: - default detectMultilineErrors: true",
"[transforms.detect_exceptions_app-logs] type = \"detect_exceptions\" inputs = [\"application\"] languages = [\"All\"] group_by = [\"kubernetes.namespace_name\",\"kubernetes.pod_name\",\"kubernetes.container_name\"] expire_after_ms = 2000 multiline_flush_interval_ms = 1000",
"<label @MULTILINE_APP_LOGS> <match kubernetes.**> @type detect_exceptions remove_tag_prefix 'kubernetes' message message force_line_breaks true multiline_flush_interval .2 </match> </label>",
"oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json= <your_service_account_key_file.json>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: gcp-1 type: googleCloudLogging secret: name: gcp-secret googleCloudLogging: projectId : \"openshift-gce-devel\" 4 logId : \"app-gcp\" 5 pipelines: - name: test-app inputRefs: 6 - application outputRefs: - gcp-1",
"oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: splunk-receiver 4 secret: name: vector-splunk-secret 5 type: splunk 6 url: <http://your.splunk.hec.url:8088> 7 pipelines: 8 - inputRefs: - application - infrastructure name: 9 outputRefs: - splunk-receiver 10",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: httpout-app type: http url: 4 http: headers: 5 h1: v1 h2: v2 method: POST secret: name: 6 tls: insecureSkipVerify: 7 pipelines: - name: inputRefs: - application outputRefs: - httpout-app 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project 8 pipelines: - name: forward-to-fluentd-insecure 9 inputRefs: 10 - my-app-logs outputRefs: 11 - fluentd-server-insecure labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application 14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 inputs: 5 - name: myAppLogData application: selector: matchLabels: 6 environment: production app: nginx namespaces: 7 - app1 - app2 outputs: 8 - <output_name>",
"- inputRefs: [ myAppLogData, myOtherAppLogData ]",
"oc create -f <file-name>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 outputRefs: default filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: loki-insecure 4 type: \"loki\" 5 url: http://loki.insecure.com:3100 6 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure 7 type: \"loki\" url: https://loki.secure.com:3100 secret: name: loki-secret 8 loki: tenantKey: kubernetes.namespace_name 9 labelKeys: - kubernetes.labels.foo 10 pipelines: - name: application-logs 11 inputRefs: 12 - application - audit outputRefs: 13 - loki-secure",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-example 4 type: elasticsearch 5 elasticsearch: version: 8 6 url: http://elasticsearch.example.com:9200 7 secret: name: es-secret 8 pipelines: - name: application-logs 9 inputRefs: 10 - application - audit outputRefs: - elasticsearch-example 11 - default 12 labels: myLabel: \"myValue\" 13",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password>",
"oc create secret -n openshift-logging openshift-test-secret.yaml",
"kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 labels: clusterId: \"C1234\" 11 - name: forward-to-fluentd-insecure 12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: rsyslog-east 4 type: syslog 5 syslog: 6 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 7 secret: 8 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 9 inputRefs: 10 - audit - application outputRefs: 11 - rsyslog-east - default 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"",
"oc create -f <filename>.yaml",
"spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout",
"<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}",
"<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: app-logs 4 type: kafka 5 url: tls://kafka.example.devlab.com:9093/app-topic 6 secret: name: kafka-secret 7 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 8 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 9 inputRefs: 10 - application outputRefs: 11 - app-logs labels: logType: \"application\" 12 - name: infra-topic 13 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: \"infra\" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs labels: logType: \"audit\"",
"spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=",
"oc apply -f cw-secret.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: cw-secret 9 pipelines: - name: infra-logs 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12",
"oc create -f <file-name>.yaml",
"oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"",
"oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message",
"oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"",
"aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": \"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },",
"cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"",
"cloudwatch: groupBy: namespaceName region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"cloudwatch: groupBy: namespaceUUID region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions",
"apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <your_role_name>-credrequest namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - logs:PutLogEvents - logs:CreateLogGroup - logs:PutRetentionPolicy - logs:CreateLogStream - logs:DescribeLogGroups - logs:DescribeLogStreams effect: Allow resource: arn:aws:logs:*:*:* secretRef: name: <your_role_name> namespace: openshift-logging serviceAccountNames: - logcollector",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 1",
"oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: clf-collector 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: <your_secret_name> 9 pipelines: - name: to-cloudwatch 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: <log_collector_type> 1 resources: {} tolerations: {}",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1alpha1 kind: LogFileMetricExporter metadata: name: instance namespace: openshift-logging spec: nodeSelector: {} 1 resources: 2 limits: cpu: 500m memory: 256Mi requests: cpu: 200m memory: 128Mi tolerations: [] 3",
"oc apply -f <filename>.yaml",
"oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging",
"NAME READY STATUS RESTARTS AGE logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: fluentd resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi",
"apiVersion: logging.openshift.io/v1beta1 kind: ClusterLogForwarder metadata: spec: serviceAccountName: <service_account_name> inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - name: http-pipeline inputRefs: - http-receiver",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - inputRefs: - http-receiver name: http-pipeline",
"oc apply -f <filename>.yaml",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9",
"oc get pods -l component=collector -n openshift-logging",
"oc extract configmap/collector-config --confirm",
"<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size \"#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}\" chunk_limit_size 8m overflow_action throw_exception disable_chunk_backup true </buffer>",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9",
"oc process -f <templatefile> | oc apply -n openshift-logging -f -",
"oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -",
"serviceaccount/eventrouter created clusterrole.rbac.authorization.k8s.io/event-reader created clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created",
"oc get pods --selector component=eventrouter -o name -n openshift-logging",
"pod/cluster-logging-eventrouter-d649f97c8-qvv8r",
"oc logs <cluster_logging_eventrouter_pod> -n openshift-logging",
"oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging",
"{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: logging-loki-s3 3 type: s3 4 storageClassName: <storage_class_name> 5 tenants: mode: openshift-logging 6",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"oc create secret generic -n openshift-logging <your_secret_name> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"oc get secrets",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: size: 1x.small 1 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 2 type: s3 3 storageClassName: <storage_class_name> 4 tenants: mode: openshift-logging 5",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s",
"oc create secret generic logging-loki-aws --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\"",
"oc create secret generic logging-loki-azure --from-literal=container=\"<azure_container_name>\" --from-literal=environment=\"<azure_environment>\" \\ 1 --from-literal=account_name=\"<azure_account_name>\" --from-literal=account_key=\"<azure_account_key>\"",
"oc create secret generic logging-loki-gcs --from-literal=bucketname=\"<bucket_name>\" --from-file=key.json=\"<path/to/key.json>\"",
"oc create secret generic logging-loki-minio --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<minio_bucket_endpoint>\" --from-literal=access_key_id=\"<minio_access_key_id>\" --from-literal=access_key_secret=\"<minio_access_key_secret>\"",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io",
"BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')",
"ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)",
"oc create -n openshift-logging secret generic logging-loki-odf --from-literal=access_key_id=\"<access_key_id>\" --from-literal=access_key_secret=\"<secret_access_key>\" --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"https://<bucket_host>:<bucket_port>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\" --from-literal=project_id=\"<swift_project_id>\" --from-literal=project_name=\"<swift_project_name>\" --from-literal=project_domain_id=\"<swift_project_domain_id>\" --from-literal=project_domain_name=\"<swift_project_domain_name>\" --from-literal=region=\"<swift_region>\"",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator",
"oc apply -f <filename>.yaml",
"oc get csv -n --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"oc adm groups new cluster-admin",
"oc adm groups add-users cluster-admin <username>",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admin",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"delete pvc __<pvc_name>__ -n openshift-logging",
"delete pod __<pod_name>__ -n openshift-logging",
"patch pvc __<pvc_name>__ -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: standard tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\",\"type\": \"memberlist\"}}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods -l component=elasticsearch",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods -l component=elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: type: \"fluentd\" fluentd: {}",
"oc get pods -l component=collector -n openshift-logging",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputs: - name: kafka-example 1 type: kafka 2 limit: maxRecordsPerSecond: 1000000 3",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: selector: matchLabels: { example: label } 2 containerLimit: maxRecordsPerSecond: 0 3",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: namespaces: [ example-ns-1, example-ns-2 ] 2 containerLimit: maxRecordsPerSecond: 10 3 - name: <input_name> application: namespaces: [ test ] containerLimit: maxRecordsPerSecond: 1000",
"oc apply -f <filename>.yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux failure-domain.beta.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' failure-domain.beta.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos beta.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 beta.kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: \"Managed\" collection: type: \"vector\" tolerations: - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed",
"oc apply -f <filename>.yaml",
"oc get pods --selector component=collector -o wide -n <project_name>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: v1 kind: Pod metadata: name: collector-example namespace: openshift-logging spec: collection: type: vector tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: vector tolerations: - key: collector 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: \"Managed\" collection: type: \"vector\" tolerations: - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed",
"oc apply -f <filename>.yaml",
"oc get pods --selector component=collector -o wide -n <project_name>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV",
"currentCSV: serverless-operator.v1.28.0",
"oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless",
"subscription.operators.coreos.com \"serverless-operator\" deleted",
"oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless",
"clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/logging/index
|
Chapter 1. Release notes for the Red Hat build of OpenTelemetry
|
Chapter 1. Release notes for the Red Hat build of OpenTelemetry 1.1. Red Hat build of OpenTelemetry overview Red Hat build of OpenTelemetry is based on the open source OpenTelemetry project , which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. Red Hat build of OpenTelemetry product provides support for deploying and managing the OpenTelemetry Collector and simplifying the workload instrumentation. The OpenTelemetry Collector can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs. The OpenTelemetry Collector has a number of features including the following: Data Collection and Processing Hub It acts as a central component that gathers telemetry data like metrics and traces from various sources. This data can be created from instrumented applications and infrastructure. Customizable telemetry data pipeline The OpenTelemetry Collector is designed to be customizable. It supports various processors, exporters, and receivers. Auto-instrumentation features Automatic instrumentation simplifies the process of adding observability to applications. Developers don't need to manually instrument their code for basic telemetry data. Here are some of the use cases for the OpenTelemetry Collector: Centralized data collection In a microservices architecture, the Collector can be deployed to aggregate data from multiple services. Data enrichment and processing Before forwarding data to analysis tools, the Collector can enrich, filter, and process this data. Multi-backend receiving and exporting The Collector can receive and send data to multiple monitoring and analysis platforms simultaneously. You can use the Red Hat build of OpenTelemetry in combination with the Red Hat OpenShift distributed tracing platform (Tempo) . Note Only supported features are documented. Undocumented features are currently unsupported. If you need assistance with a feature, contact Red Hat's support. 1.2. Release notes for Red Hat build of OpenTelemetry 3.5 The Red Hat build of OpenTelemetry 3.5 is provided through the Red Hat build of OpenTelemetry Operator 0.119.0 . Note The Red Hat build of OpenTelemetry 3.5 is based on the open source OpenTelemetry release 0.119.0. 1.2.1. New features and enhancements This update introduces the following enhancements: The following Technology Preview features reach General Availability: Host Metrics Receiver Kubelet Stats Receiver With this update, the OpenTelemetry Collector uses the OTLP HTTP Exporter to push logs to a LokiStack instance. With this update, the Operator automatically creates RBAC rules for the Kubernetes Events Receiver ( k8sevents ), Kubernetes Cluster Receiver ( k8scluster ), and Kubernetes Objects Receiver ( k8sobjects ) if the Operator has sufficient permissions. For more information, see "Creating the required RBAC resources automatically" in Configuring the Collector . 1.2.2. Deprecated functionality In the Red Hat build of OpenTelemetry 3.5, the Loki Exporter, which is a temporary Technology Preview feature, is deprecated. The Loki Exporter is planned to be removed in the Red Hat build of OpenTelemetry 3.6. If you currently use the Loki Exporter for the OpenShift Logging 6.1 or later, replace the Loki Exporter with the OTLP HTTP Exporter. 
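To illustrate what moving off the deprecated Loki Exporter could look like, the following is a minimal sketch of an OpenTelemetryCollector resource that forwards logs through the OTLP HTTP Exporter to a LokiStack gateway. The resource name, namespace, gateway service host, endpoint path, and the omitted TLS and authentication settings are assumptions for illustration only and are not taken from this document; adapt them to your cluster before use.
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel                      # assumed name
  namespace: openshift-logging    # assumed namespace
spec:
  config:
    receivers:
      otlp:                       # receive OTLP logs from instrumented workloads
        protocols:
          grpc: {}
          http: {}
    exporters:
      otlphttp:                   # OTLP HTTP Exporter replacing the deprecated Loki Exporter
        # assumed LokiStack gateway endpoint; TLS and token authentication are omitted in this sketch
        endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp
    service:
      pipelines:
        logs:
          receivers: [otlp]
          exporters: [otlphttp]
The pipeline shape (receivers feeding exporters through spec.config) follows the Collector's receiver-processor-exporter model described above; only the exporter entry changes when the Loki Exporter is swapped out.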
Important The Loki Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.2.3. Bug fixes This update introduces the following bug fix: Before this update, manually created routes for the Collector services were unintentionally removed when the Operator pod was restarted. With this update, restarting the Operator pod does not result in the removal of the manually created routes. 1.3. Release notes for Red Hat build of OpenTelemetry 3.4 The Red Hat build of OpenTelemetry 3.4 is provided through the Red Hat build of OpenTelemetry Operator 0.113.0 . The Red Hat build of OpenTelemetry 3.4 is based on the open source OpenTelemetry release 0.113.0. 1.3.1. Technology Preview features This update introduces the following Technology Preview features: OpenTelemetry Protocol (OTLP) JSON File Receiver Count Connector Important Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.3.2. New features and enhancements This update introduces the following enhancements: The following Technology Preview features reach General Availability: BearerTokenAuth Extension Kubernetes Attributes Processor Spanmetrics Connector You can use the instrumentation.opentelemetry.io/inject-sdk annotation with the Instrumentation custom resource to enable injection of the OpenTelemetry SDK environment variables into multi-container pods. 1.3.3. Removal notice In the Red Hat build of OpenTelemetry 3.4, the Logging Exporter has been removed from the Collector. As an alternative, you must use the Debug Exporter instead. Warning If you have the Logging Exporter configured, upgrading to the Red Hat build of OpenTelemetry 3.4 will cause crash loops. To avoid such issues, you must configure the Red Hat build of OpenTelemetry to use the Debug Exporter instead of the Logging Exporter before upgrading to the Red Hat build of OpenTelemetry 3.4. In the Red Hat build of OpenTelemetry 3.4, the Technology Preview Memory Ballast Extension has been removed. As an alternative, you can use the GOMEMLIMIT environment variable instead. 1.4. Release notes for Red Hat build of OpenTelemetry 3.3.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. The Red Hat build of OpenTelemetry 3.3.1 is based on the open source OpenTelemetry release 0.107.0. 1.4.1. Bug fixes This update introduces the following bug fix: Before this update, injection of the NGINX auto-instrumentation failed when copying the instrumentation libraries into the application container. With this update, the copy command is configured correctly, which fixes the issue. 
( TRACING-4673 ) 1.5. Release notes for Red Hat build of OpenTelemetry 3.3 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. The Red Hat build of OpenTelemetry 3.3 is based on the open source OpenTelemetry release 0.107.0. 1.5.1. CVEs This release fixes the following CVEs: CVE-2024-6104 CVE-2024-42368 1.5.2. Technology Preview features This update introduces the following Technology Preview features: Group-by-Attributes Processor Transform Processor Routing Connector Prometheus Remote Write Exporter Exporting logs to the LokiStack log store Important Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.5.3. New features and enhancements This update introduces the following enhancements: Collector dashboard for the internal Collector metrics and analyzing Collector health and performance. ( TRACING-3768 ) Support for automatically reloading certificates in both the OpenTelemetry Collector and instrumentation. ( TRACING-4186 ) 1.5.4. Bug fixes This update introduces the following bug fixes: Before this update, the ServiceMonitor object was failing to scrape operator metrics due to missing permissions for accessing the metrics endpoint. With this update, this issue is fixed by creating the ServiceMonitor custom resource when operator monitoring is enabled. ( TRACING-4288 ) Before this update, the Collector service and the headless service were both monitoring the same endpoints, which caused duplication of metrics collection and ServiceMonitor objects. With this update, this issue is fixed by not creating the headless service. ( OBSDA-773 ) 1.6. Release notes for Red Hat build of OpenTelemetry 3.2.2 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.6.1. CVEs This release fixes the following CVEs: CVE-2023-2953 CVE-2024-28182 1.6.2. Bug fixes This update introduces the following bug fix: Before this update, secrets were perpetually generated on OpenShift Container Platform 4.16 because the operator tried to reconcile a new openshift.io/internal-registry-pull-secret-ref annotation for service accounts, causing a loop. With this update, the operator ignores this new annotation. ( TRACING-4435 ) 1.7. Release notes for Red Hat build of OpenTelemetry 3.2.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.7.1. CVEs This release fixes the following CVEs: CVE-2024-25062 Upstream CVE-2024-36129 1.7.2. New features and enhancements This update introduces the following enhancement: Red Hat build of OpenTelemetry 3.2.1 is based on the open source OpenTelemetry release 0.102.1. 1.8. Release notes for Red Hat build of OpenTelemetry 3.2 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.8.1. 
Technology Preview features This update introduces the following Technology Preview features: Host Metrics Receiver OIDC Auth Extension Kubernetes Cluster Receiver Kubernetes Events Receiver Kubernetes Objects Receiver Load-Balancing Exporter Kubelet Stats Receiver Cumulative to Delta Processor Forward Connector Journald Receiver Filelog Receiver File Storage Extension Important Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.8.2. New features and enhancements This update introduces the following enhancement: Red Hat build of OpenTelemetry 3.2 is based on the open source OpenTelemetry release 0.100.0. 1.8.3. Deprecated functionality In Red Hat build of OpenTelemetry 3.2, use of empty values and null keywords in the OpenTelemetry Collector custom resource is deprecated and planned to be unsupported in a future release. Red Hat will provide bug fixes and support for this syntax during the current release lifecycle, but this syntax will become unsupported. As an alternative to empty values and null keywords, you can update the OpenTelemetry Collector custom resource to contain empty JSON objects as open-closed braces {} instead. 1.8.4. Bug fixes This update introduces the following bug fix: Before this update, the checkbox to enable Operator monitoring was not available in the web console when installing the Red Hat build of OpenTelemetry Operator. As a result, a ServiceMonitor resource was not created in the openshift-opentelemetry-operator namespace. With this update, the checkbox appears for the Red Hat build of OpenTelemetry Operator in the web console so that Operator monitoring can be enabled during installation. ( TRACING-3761 ) 1.9. Release notes for Red Hat build of OpenTelemetry 3.1.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.9.1. CVEs This release fixes CVE-2023-39326 . 1.10. Release notes for Red Hat build of OpenTelemetry 3.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.10.1. Technology Preview features This update introduces the following Technology Preview feature: The target allocator is an optional component of the OpenTelemetry Operator that shards Prometheus receiver scrape targets across the deployed fleet of OpenTelemetry Collector instances. The target allocator provides integration with the Prometheus PodMonitor and ServiceMonitor custom resources. Important The target allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.10.2. 
New features and enhancements This update introduces the following enhancement: Red Hat build of OpenTelemetry 3.1 is based on the open source OpenTelemetry release 0.93.0. 1.11. Release notes for Red Hat build of OpenTelemetry 3.0 1.11.1. New features and enhancements This update introduces the following enhancements: Red Hat build of OpenTelemetry 3.0 is based on the open source OpenTelemetry release 0.89.0. The OpenShift distributed tracing data collection Operator is renamed as the Red Hat build of OpenTelemetry Operator . Support for the ARM architecture. Support for the Prometheus receiver for metrics collection. Support for the Kafka receiver and exporter for sending traces and metrics to Kafka. Support for cluster-wide proxy environments. The Red Hat build of OpenTelemetry Operator creates the Prometheus ServiceMonitor custom resource if the Prometheus exporter is enabled. The Operator enables the Instrumentation custom resource that allows injecting upstream OpenTelemetry auto-instrumentation libraries. 1.11.2. Removal notice In Red Hat build of OpenTelemetry 3.0, the Jaeger exporter has been removed. Bug fixes and support are provided only through the end of the 2.9 lifecycle. As an alternative to the Jaeger exporter for sending data to the Jaeger collector, you can use the OTLP exporter instead. 1.11.3. Bug fixes This update introduces the following bug fixes: Fixed support for disconnected environments when using the oc adm catalog mirror CLI command. 1.11.4. Known issues There is currently a known issue: Currently, the cluster monitoring of the Red Hat build of OpenTelemetry Operator is disabled due to a bug ( TRACING-3761 ). The bug is preventing the cluster monitoring from scraping metrics from the Red Hat build of OpenTelemetry Operator due to a missing label openshift.io/cluster-monitoring=true that is required for the cluster monitoring and service monitor object. Workaround You can enable the cluster monitoring as follows: Add the following label in the Operator namespace: oc label namespace openshift-opentelemetry-operator openshift.io/cluster-monitoring=true Create a service monitor, role, and role binding: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: opentelemetry-operator-controller-manager-metrics-service namespace: openshift-opentelemetry-operator spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token path: /metrics port: https scheme: https tlsConfig: insecureSkipVerify: true selector: matchLabels: app.kubernetes.io/name: opentelemetry-operator control-plane: controller-manager --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" rules: - apiGroups: - "" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: otel-operator-prometheus subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring 1.12. 
Release notes for Red Hat build of OpenTelemetry 2.9.2 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.9.2 is based on the open source OpenTelemetry release 0.81.0. 1.12.1. CVEs This release fixes CVE-2023-46234 . 1.12.2. Known issues There is currently a known issue: Currently, you must manually set Operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.13. Release notes for Red Hat build of OpenTelemetry 2.9.1 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.9.1 is based on the open source OpenTelemetry release 0.81.0. 1.13.1. CVEs This release fixes CVE-2023-44487 . 1.13.2. Known issues There is currently a known issue: Currently, you must manually set Operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.14. Release notes for Red Hat build of OpenTelemetry 2.9 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.9 is based on the open source OpenTelemetry release 0.81.0. 1.14.1. New features and enhancements This release introduces the following enhancements for the Red Hat build of OpenTelemetry: Support OTLP metrics ingestion. The metrics can be forwarded and stored in the user-workload-monitoring via the Prometheus exporter. Support the Operator maturity Level IV, Deep Insights, which enables upgrading and monitoring of OpenTelemetry Collector instances and the Red Hat build of OpenTelemetry Operator. Report traces and metrics from remote clusters using OTLP or HTTP and HTTPS. Collect OpenShift Container Platform resource attributes via the resourcedetection processor. Support the managed and unmanaged states in the OpenTelemetryCollector custom resource. 1.14.2. Known issues There is currently a known issue: Currently, you must manually set Operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.15.
Release notes for Red Hat build of OpenTelemetry 2.8 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.8 is based on the open source OpenTelemetry release 0.74.0. 1.15.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.16. Release notes for Red Hat build of OpenTelemetry 2.7 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.7 is based on the open source OpenTelemetry release 0.63.1. 1.16.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.17. Release notes for Red Hat build of OpenTelemetry 2.6 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.6 is based on the open source OpenTelemetry release 0.60. 1.17.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.18. Release notes for Red Hat build of OpenTelemetry 2.5 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.5 is based on the open source OpenTelemetry release 0.56. 1.18.1. New features and enhancements This update introduces the following enhancement: Support for collecting Kubernetes resource attributes to the Red Hat build of OpenTelemetry Operator. 1.18.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.19. 
Release notes for Red Hat build of OpenTelemetry 2.4 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.4 is based on the open source OpenTelemetry release 0.49. 1.19.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.20. Release notes for Red Hat build of OpenTelemetry 2.3 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.3.1 is based on the open source OpenTelemetry release 0.44.1. Red Hat build of OpenTelemetry 2.3.0 is based on the open source OpenTelemetry release 0.44.0. 1.20.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.21. Release notes for Red Hat build of OpenTelemetry 2.2 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.2 is based on the open source OpenTelemetry release 0.42.0. 1.21.1. Technology Preview features The unsupported OpenTelemetry Collector components included in the 2.1 release are removed. 1.21.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.22. Release notes for Red Hat build of OpenTelemetry 2.1 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.1 is based on the open source OpenTelemetry release 0.41.1. 1.22.1. 
Technology Preview features This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. With this update, the ca_file moves under tls in the custom resource, as shown in the following examples. CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.22.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.23. Release notes for Red Hat build of OpenTelemetry 2.0 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.0 is based on the open source OpenTelemetry release 0.33.0. This release adds the Red Hat build of OpenTelemetry as a Technology Preview , which you install using the Red Hat build of OpenTelemetry Operator. Red Hat build of OpenTelemetry is based on the OpenTelemetry APIs and instrumentation. The Red Hat build of OpenTelemetry includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the Red Hat build of OpenTelemetry. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.24. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.25. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
Chapter 4. Packaging software In the following sections, learn the basics of the packaging process with the RPM package manager. 4.1. Setting up RPM packaging workspace To build RPM packages, you must first create a special workspace that consists of directories used for different packaging purposes. 4.1.1. Configuring RPM packaging workspace To configure the RPM packaging workspace, you can set up a directory layout by using the rpmdev-setuptree utility. Prerequisites You installed the rpmdevtools package, which provides utilities for packaging RPMs: Procedure Run the rpmdev-setuptree utility: Additional resources RPM packaging workspace directories 4.1.2. RPM packaging workspace directories The following are the RPM packaging workspace directories created by using the rpmdev-setuptree utility: Table 4.1. RPM packaging workspace directories Directory Purpose BUILD Contains build artifacts compiled from the source files from the SOURCES directory. RPMS Binary RPMs are created under the RPMS directory in subdirectories for different architectures. For example, in the x86_64 or noarch subdirectory. SOURCES Contains compressed source code archives and patches. The rpmbuild command then searches for these archives and patches in this directory. SPECS Contains spec files created by the packager. These files are then used for building packages. SRPMS When you use the rpmbuild command to build an SRPM instead of a binary RPM, the resulting SRPM is created under this directory. 4.2. About spec files A spec file is a file with instructions that the rpmbuild utility uses to build an RPM package. This file provides necessary information to the build system by defining instructions in a series of sections. These sections are defined in the Preamble and the Body part of the spec file: The Preamble section contains a series of metadata items that are used in the Body section. The Body section represents the main part of the instructions. 4.2.1. Preamble items The following are some of the directives that you can use in the Preamble section of the RPM spec file. Table 4.2. The Preamble section directives Directive Definition Name A base name of the package that must match the spec file name. Version An upstream version number of the software. Release The number of times the version of the package was released. Set the initial value to 1%{?dist} and increase it with each new release of the package. Reset to 1 when a new Version of the software is built. Summary A brief one-line summary of the package. License A license of the software being packaged. The exact format for how to label the License in your spec file varies depending on which RPM-based Linux distribution guidelines you are following, for example, GPLv3+ . URL A full URL for more information about the software, for example, an upstream project website for the software being packaged. Source A path or URL to the compressed archive of the unpatched upstream source code. This link must point to an accessible and reliable storage of the archive, for example, the upstream page, not the packager's local storage. You can apply the Source directive either with or without numbers at the end of the directive name. If there is no number given, the number is assigned to the entry internally. You can also give the numbers explicitly, for example, Source0 , Source1 , Source2 , Source3 , and so on. Patch A name of the first patch to apply to the source code, if necessary. 
You can apply the Patch directive either with or without numbers at the end of the directive name. If there is no number given, the number is assigned to the entry internally. You can also give the numbers explicitly, for example, Patch0 , Patch1 , Patch2 , Patch3 , and so on. You can apply the patches individually by using the %patch0 , %patch1 , %patch2 macro, and so on. Macros are applied within the %prep directive in the Body section of the RPM spec file. Alternatively, you can use the %autopatch macro that automatically applies all patches in the order they are given in the spec file. BuildArch An architecture that the software will be built for. If the software is not architecture-dependent, for example, if you wrote the software entirely in an interpreted programming language, set the value to BuildArch: noarch . If you do not set this value, the software automatically inherits the architecture of the machine on which it is built, for example, x86_64 . BuildRequires A comma- or whitespace-separated list of packages required to build the program written in a compiled language. There can be multiple entries of BuildRequires , each on its own line in the SPEC file. Requires A comma- or whitespace-separated list of packages required by the software to run once installed. There can be multiple entries of Requires , each on its own line in the spec file. ExcludeArch If a piece of software cannot operate on a specific processor architecture, you can exclude this architecture in the ExcludeArch directive. Conflicts A comma- or whitespace-separated list of packages that must not be installed on the system in order for your software to function properly when installed. There can be multiple entries of Conflicts , each on its own line in the spec file. Obsoletes The Obsoletes directive changes the way updates work depending on the following factors: If you use the rpm command directly on a command line, it removes all packages that match obsoletes of packages being installed, or the update is performed by an updates or dependency solver. If you use the updates or dependency resolver ( DNF ), packages containing matching Obsoletes: are added as updates and replace the matching packages. Provides If you add the Provides directive to the package, this package can be referred to by dependencies other than its name. The Name , Version , and Release ( NVR ) directives comprise the file name of the RPM package in the name-version-release format. You can display the NVR information for a specific package by querying RPM database by using the rpm command, for example: Here, bash is the package name, 4.4.19 is the version, and 7.el8 is the release. The x86_64 marker is the package architecture. Unlike NVR , the architecture marker is not under direct control of the RPM packager, but is defined by the rpmbuild build environment. The exception to this is the architecture-independent noarch package. 4.2.2. Body items The following are the items used in the Body section of the RPM spec file. Table 4.3. The Body section items Directive Definition %description A full description of the software packaged in the RPM. This description can span multiple lines and can be broken into paragraphs. %prep A command or series of commands to prepare the software for building, for example, for unpacking the archive in the Source directive. The %prep directive can contain a shell script. 
%build A command or series of commands for building the software into machine code (for compiled languages) or bytecode (for some interpreted languages). %install A command or series of commands that the rpmbuild utility will use to install the software into the BUILDROOT directory once the software has been built. These commands copy the desired build artifacts from the %_builddir directory, where the build happens, to the %buildroot directory that contains the directory structure with the files to be packaged. This includes copying files from ~/rpmbuild/BUILD to ~/rpmbuild/BUILDROOT and creating the necessary directories in ~/rpmbuild/BUILDROOT . The %install directory is an empty chroot base directory, which resembles the end user's root directory. Here you can create any directories that will contain the installed files. To create such directories, you can use RPM macros without having to hardcode the paths. Note that %install is only run when you create a package, not when you install it. For more information, see Working with spec files . %check A command or series of commands for testing the software, for example, unit tests. %files A list of files, provided by the RPM package, to be installed in the user's system and their full path location on the system. During the build, if there are files in the %buildroot directory that are not listed in %files , you will receive a warning about possible unpackaged files. Within the %files section, you can indicate the role of various files by using built-in macros. This is useful for querying the package file manifest metadata by using the rpm command. For example, to indicate that the LICENSE file is a software license file, use the %license macro. %changelog A record of changes that happened to the package between different Version or Release builds. These changes include a list of date-stamped entries for each Version-Release of the package. These entries log packaging changes, not software changes, for example, adding a patch or changing the build procedure in the %build section. 4.2.3. Advanced items A spec file can contain advanced items, such as Scriptlets or Triggers . Scriptlets and Triggers take effect at different points during the installation process on the end user's system, not the build process. 4.3. BuildRoots In the context of RPM packaging, buildroot is a chroot environment. The build artifacts are placed here by using the same file system hierarchy as the future hierarchy in the end user's system, with buildroot acting as the root directory. The placement of build artifacts must comply with the file system hierarchy standard of the end user's system. The files in buildroot are later put into a cpio archive, which becomes the main part of the RPM. When RPM is installed on the end user's system, these files are extracted in the root directory, preserving the correct hierarchy. Note The rpmbuild program has its own defaults. Overriding these defaults can cause certain issues. Therefore, avoid defining your own value of the buildroot macro. Use the default %{buildroot} macro instead. 4.4. RPM macros An rpm macro is a straight text substitution that can be conditionally assigned based on the optional evaluation of a statement when certain built-in functionality is used. Therefore, RPM can perform text substitutions for you. For example, you can define Version of the packaged software only once in the %{version} macro, and use this macro throughout the spec file. 
Every occurrence is automatically substituted by Version that you defined in the macro. Note If you see an unfamiliar macro, you can evaluate it with the following command: For example, to evaluate the %{_bindir} and %{_libexecdir} macros, enter: Additional resources More on macros 4.5. Working with spec files To package new software, you must create a spec file. You can create the spec file either of the following ways: Write the new spec file manually from scratch. Use the rpmdev-newspec utility. This utility creates an unpopulated spec file, where you fill the necessary directives and fields. Note Some programmer-focused text editors pre-populate a new spec file with their own spec template. The rpmdev-newspec utility provides an editor-agnostic method. 4.5.1. Creating a new spec file for sample Bash, Python, and C programs You can create a spec file for each of the three implementations of the Hello World! program by using the rpmdev-newspec utility. Prerequisites The following Hello World! program implementations were placed into the ~/rpmbuild/SOURCES directory: bello-0.1.tar.gz pello-0.1.2.tar.gz cello-1.0.tar.gz ( cello-output-first-patch.patch ) Procedure Navigate to the ~/rpmbuild/SPECS directory: Create a spec file for each of the three implementations of the Hello World! program: The ~/rpmbuild/SPECS/ directory now contains three spec files named bello.spec , cello.spec , and pello.spec . Examine the created files. The directives in the files represent those described in About spec files . In the following sections, you will populate particular section in the output files of rpmdev-newspec . 4.5.2. Modifying an original spec file The original output spec file generated by the rpmdev-newspec utility represents a template that you must modify to provide necessary instructions for the rpmbuild utility. rpmbuild then uses these instructions to build an RPM package. Prerequisites The unpopulated ~/rpmbuild/SPECS/<name>.spec spec file was created by using the rpmdev-newspec utility. For more information, see Creating a new spec file for sample Bash, Python, and C programs . Procedure Open the ~/rpmbuild/SPECS/<name>.spec file provided by the rpmdev-newspec utility. Populate the following directives of the spec file Preamble section: Name Name was already specified as an argument to rpmdev-newspec . Version Set Version to match the upstream release version of the source code. Release Release is automatically set to 1%{?dist} , which is initially 1 . Summary Enter a one-line explanation of the package. License Enter the software license associated with the source code. URL Enter the URL to the upstream software website. For consistency, utilize the %{name} RPM macro variable and use the https://example.com/%{name} format. Source Enter the URL to the upstream software source code. Link directly to the software version being packaged. Note The example URLs in this documentation include hard-coded values that could possibly change in the future. Similarly, the release version can change as well. To simplify these potential future changes, use the %{name} and %{version} macros. By using these macros, you need to update only one field in the spec file. BuildRequires Specify build-time dependencies for the package. Requires Specify run-time dependencies for the package. BuildArch Specify the software architecture. Populate the following directives of the spec file Body section. 
You can think of these directives as section headings, because these directives can define multi-line, multi-instruction, or scripted tasks to occur. %description Enter the full description of the software. %prep Enter a command or series of commands to prepare software for building. %build Enter a command or series of commands for building software. %install Enter a command or series of commands that instruct the rpmbuild command on how to install the software into the BUILDROOT directory. %files Specify the list of files, provided by the RPM package, to be installed on your system. %changelog Enter the list of datestamped entries for each Version-Release of the package. Start the first line of the %changelog section with an asterisk ( * ) character followed by Day-of-Week Month Day Year Name Surname <email> - Version-Release . For the actual change entry, follow these rules: Each change entry can contain multiple items, one for each change. Each item starts on a new line. Each item begins with a hyphen ( - ) character. You have now written an entire spec file for the required program. Additional resources Preamble items Body items An example spec file for a sample Bash program An example spec file for a sample Python program An example spec file for a sample C program Building RPMs 4.5.3. An example spec file for a sample Bash program You can use the following example spec file for the bello program written in bash for your reference. An example spec file for the bello program written in bash The BuildRequires directive, which specifies build-time dependencies for the package, was deleted because there is no building step for bello . Bash is a raw interpreted programming language, and the files are just installed to their location on the system. The Requires directive, which specifies run-time dependencies for the package, includes only bash , because the bello script requires only the bash shell environment to execute. The %build section, which specifies how to build the software, is blank, because the bash script does not need to be built. Note To install bello , you must create the destination directory and install the executable bash script file there. Therefore, you can use the install command in the %install section. You can use RPM macros to do this without hardcoding paths. Additional resources What is source code 4.5.4. An example SPEC file for a program written in Python You can use the following example spec file for the pello program written in the Python programming language for your reference. An example spec file for the pello program written in Python %global python3_pkgversion 3.11 1 Name: python-pello 2 Version: 1.0.2 Release: 1%{?dist} Summary: Example Python library License: MIT URL: https://github.com/fedora-python/Pello Source: %{url}/archive/v%{version}/Pello-%{version}.tar.gz BuildArch: noarch BuildRequires: python%{python3_pkgversion}-devel 3 # Build dependencies needed to be specified manually BuildRequires: python%{python3_pkgversion}-setuptools # Test dependencies needed to be specified manually # Also runtime dependencies need to be BuildRequired manually to run tests during build BuildRequires: python%{python3_pkgversion}-pytest >= 3 %global _description %{expand: Pello is an example package with an executable that prints Hello World! 
on the command line.} %description %_description %package -n python%{python3_pkgversion}-pello 4 Summary: %{summary} %description -n python%{python3_pkgversion}-pello %_description %prep %autosetup -p1 -n Pello-%{version} %build # The macro only supported projects with setup.py %py3_build 5 %install # The macro only supported projects with setup.py %py3_install %check 6 %{pytest} # Note that there is no %%files section for the unversioned python module %files -n python%{python3_pkgversion}-pello %doc README.md %license LICENSE.txt %{_bindir}/pello_greeting # The library files needed to be listed manually %{python3_sitelib}/pello/ # The metadata files needed to be listed manually %{python3_sitelib}/Pello-*.egg-info/ 1 By defining the python3_pkgversion macro, you set which Python version this package will be built for. To build for the default Python version 3.9, either set the macro to its default value 3 or remove the line entirely. 2 When packaging a Python project into RPM, always add the python- prefix to the original name of the project. The original name here is pello and, therefore, the name of the Source RPM (SRPM) is python-pello . 3 The BuildRequires directive specifies what packages are required to build and test this package. In BuildRequires , always include items providing tools necessary for building Python packages: python3-devel (or python3.11-devel or python3.12-devel ) and the relevant projects needed by the specific software that you package, for example, python3-setuptools (or python3.11-setuptools or python3.12-setuptools ) or the runtime and testing dependencies needed to run the tests in the %check section. 4 When choosing a name for the binary RPM (the package that users will be able to install), add a versioned Python prefix. Use the python3- prefix for the default Python 3.9, the python3.11- prefix for Python 3.11, or the python3.12- prefix for Python 3.12. You can use the %{python3_pkgversion} macro, which evaluates to 3 for the default Python version 3.9 unless you set it to an explicit version, for example, 3.11 (see footnote 1). 5 The %py3_build and %py3_install macros run the setup.py build and setup.py install commands, respectively, with additional arguments to specify installation locations, the interpreter to use, and other details. 6 The %check section should run the tests of the packaged project. The exact command depends on the project itself, but it is possible to use the %pytest macro to run the pytest command in an RPM-friendly way. 4.5.5. An example spec file for a sample C program You can use the following example spec file for the cello program that was written in the C programming language for your reference. An example spec file for the cello program written in C The BuildRequires directive, which specifies build-time dependencies for the package, includes the following packages required to perform the compilation build process: gcc make The Requires directive, which specifies run-time dependencies for the package, is omitted in this example. All runtime requirements are handled by rpmbuild , and the cello program does not require anything outside of the core C standard libraries. The %build section reflects the fact that in this example the Makefile file for the cello program was written. Therefore, you can use the GNU make command. However, you must remove the call to %configure because you did not provide a configure script. You can install the cello program by using the %make_install macro. 
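As a rough sketch only, a cello.spec assembled from the directives discussed above could look similar to the following. The Summary, License, changelog entry, and packaged file list are illustrative assumptions; only the Name, Version, Patch, BuildRequires entries, and the use of %make_install are stated in this chapter:

Name:           cello
Version:        1.0
Release:        1%{?dist}
Summary:        Hello World example implemented in C
License:        GPLv3+
URL:            https://example.com/%{name}
Source0:        https://example.com/%{name}/releases/%{name}-%{version}.tar.gz
Patch0:         cello-output-first-patch.patch

BuildRequires:  gcc
BuildRequires:  make

%description
The long-tail description of the Hello World example implemented in C.

%prep
%setup -q
%patch0

%build
make %{?_smp_mflags}

%install
%make_install

%files
%license LICENSE
%{_bindir}/%{name}

%changelog
* Mon Jan 01 2024 Example Packager <packager@example.com> - 1.0-1
- Illustrative first build of the cello package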
Installing with the %make_install macro is possible because the Makefile file for the cello program is available. Additional resources What is source code 4.6. Building RPMs You can build RPM packages by using the rpmbuild command. When using this command, a certain directory and file structure is expected, which is the same as the structure that was set up by the rpmdev-setuptree utility. Different use cases and desired outcomes require different combinations of arguments to the rpmbuild command. The following are the main use cases: Building source RPMs. Building binary RPMs: Rebuilding a binary RPM from a source RPM. Building a binary RPM from the spec file. 4.6.1. Building source RPMs Building a Source RPM (SRPM) has the following advantages: You can preserve the exact source of a certain Name-Version-Release of an RPM file that was deployed to an environment. This includes the exact spec file, the source code, and all relevant patches. This is useful for tracking and debugging purposes. You can build a binary RPM on a different hardware platform or architecture. Prerequisites You have installed the rpmbuild utility on your system: The following Hello World! implementations were placed into the ~/rpmbuild/SOURCES/ directory: bello-0.1.tar.gz pello-0.1.2.tar.gz cello-1.0.tar.gz ( cello-output-first-patch.patch ) A spec file for the program that you want to package exists. Procedure Navigate to the ~/rpmbuild/SPECS/ directory, which contains the created spec file: Build the source RPM by entering the rpmbuild command with the specified spec file: The -bs option stands for build source . For example, to build source RPMs for the bello , pello , and cello programs, enter: Verification Verify that the rpmbuild/SRPMS directory includes the resulting source RPMs. The directory is a part of the structure expected by rpmbuild . Additional resources Working with spec files Creating a new spec file for sample Bash, Python, and C programs Modifying an original spec file 4.6.2. Rebuilding a binary RPM from a source RPM To rebuild a binary RPM from a source RPM (SRPM), use the rpmbuild command with the --rebuild option. The output generated when creating the binary RPM is verbose, which is helpful for debugging. The output varies for different examples and corresponds to their spec files. The resulting binary RPMs are located in the ~/rpmbuild/RPMS/YOURARCH directory, where YOURARCH is your architecture, or in the ~/rpmbuild/RPMS/noarch/ directory, if the package is not architecture-specific. Prerequisites You have installed the rpmbuild utility on your system: Procedure Navigate to the ~/rpmbuild/SRPMS/ directory, which contains the source RPM: Rebuild the binary RPM from the source RPM: Replace srpm with the name of the source RPM file. For example, to rebuild bello , pello , and cello from their SRPMs, enter: Note Invoking rpmbuild --rebuild involves the following processes: Installing the contents of the SRPM (the spec file and the source code) into the ~/rpmbuild/ directory. Building an RPM by using the installed contents. Removing the spec file and the source code. You can retain the spec file and the source code after building in either of the following ways: When building the RPM, use the rpmbuild command with the --recompile option instead of the --rebuild option. Install SRPMs for bello , pello , and cello : 4.6.3. Building a binary RPM from the spec file To build a binary RPM from its spec file, use the rpmbuild command with the -bb option.
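As a combined sketch of the three build modes described in this section, the commands typically look like the following. The version-release strings in the SRPM file names are assumptions based on the example spec files and an .el8 build host; the exact names on your system may differ:

# Build source RPMs from the spec files (rpmbuild -bs)
$ cd ~/rpmbuild/SPECS/
$ rpmbuild -bs bello.spec
$ rpmbuild -bs pello.spec
$ rpmbuild -bs cello.spec

# Rebuild binary RPMs from the resulting SRPMs (rpmbuild --rebuild)
$ cd ~/rpmbuild/SRPMS/
$ rpmbuild --rebuild bello-0.1-1.el8.src.rpm
$ rpmbuild --rebuild pello-0.1.2-1.el8.src.rpm
$ rpmbuild --rebuild cello-1.0-1.el8.src.rpm

# Build binary RPMs directly from the spec files (rpmbuild -bb)
$ cd ~/rpmbuild/SPECS/
$ rpmbuild -bb bello.spec
$ rpmbuild -bb pello.spec
$ rpmbuild -bb cello.spec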
Prerequisites You have installed the rpmbuild utility on your system: Procedure Navigate to the ~/rpmbuild/SPECS/ directory, which contains spec files: Build the binary RPM from its spec : For example, to build bello , pello , and cello binary RPMs from their spec files, enter: 4.7. Checking RPMs for common errors After creating a package, you might want to check the quality of the package. The main tool for checking package quality is rpmlint . With the rpmlint tool, you can perform the following actions: Improve RPM maintainability. Enable content validation by performing static analysis of the RPM. Enable error checking by performing static analysis of the RPM. You can use rpmlint to check binary RPMs, source RPMs (SRPMs), and spec files. Therefore, this tool is useful for all stages of packaging. Note that rpmlint has strict guidelines. Therefore, it is sometimes acceptable to skip some of its errors and warnings as shown in the following sections. Note In the examples described in the following sections, rpmlint is run without any options, which produces a non-verbose output. For detailed explanations of each error or warning, run rpmlint -i instead. 4.7.1. Checking a sample Bash program for common errors In the following sections, investigate possible warnings and errors that can occur when checking an RPM for common errors on the example of the bello spec file and bello binary RPM. 4.7.1.1. Checking the bello spec file for common errors Inspect the outputs of the following examples to learn how to check a bello spec file for common errors. Output of running the rpmlint command on the bello spec file For bello.spec , there is only one invalid-url Source0 warning. This warning means that the URL listed in the Source0 directive is unreachable. This is expected, because the specified example.com URL does not exist. Assuming that this URL will be valid in the future, you can ignore this warning. Output of running the rpmlint command on the bello SRPM For the bello SRPM, there is a new invalid-url URL warning that means that the URL specified in the URL directive is unreachable. Assuming that this URL will be valid in the future, you can ignore this warning. 4.7.1.2. Checking the bello binary RPM for common errors When checking binary RPMs, the rpmlint command checks the following items: Documentation Manual pages Consistent use of the filesystem hierarchy standard Inspect the outputs of the following example to learn how to check a bello binary RPM for common errors. Output of running the rpmlint command on the bello binary RPM The no-documentation and no-manual-page-for-binary warnings mean that the RPM has no documentation or manual pages, because you did not provide any. Apart from the output warnings, the RPM passed rpmlint checks. 4.7.2. Checking a sample Python program for common errors In the following sections, investigate possible warnings and errors that can occur when validating RPM content on the example of the pello spec file and pello binary RPM. 4.7.2.1. Checking the pello spec file for common errors Inspect the outputs of the following examples to learn how to check a pello spec file for common errors. Output of running the rpmlint command on the pello spec file The invalid-url Source0 warning means that the URL listed in the Source0 directive is unreachable. This is expected, because the specified example.com URL does not exist. Assuming that this URL will be valid in the future, you can ignore this warning.
The hardcoded-library-path errors suggest using the %{_libdir} macro instead of hard-coding the library path. For the sake of this example, you can safely ignore these errors. However, for packages going into production, check all errors carefully. Output of running the rpmlint command on the SRPM for pello The invalid-url URL warning means that the URL mentioned in the URL directive is unreachable. Assuming that this URL will be valid in the future, you can ignore this warning. 4.7.2.2. Checking the pello binary RPM for common errors When checking binary RPMs, the rpmlint command checks the following items: Documentation Manual pages Consistent use of the Filesystem Hierarchy Standard Inspect the outputs of the following example to learn how to check a pello binary RPM for common errors. Output of running the rpmlint command on the pello binary RPM The no-documentation and no-manual-page-for-binary warnings mean that the RPM has no documentation or manual pages because you did not provide any. The only-non-binary-in-usr-lib warning means that you provided only non-binary artifacts in the /usr/lib/ directory. This directory is typically used for shared object files, which are binary files. Therefore, rpmlint expects one or more files in /usr/lib/ to be binary. This is an example of an rpmlint check for compliance with the Filesystem Hierarchy Standard. To ensure the correct placement of files, use RPM macros. For the sake of this example, you can safely ignore this warning. The non-executable-script error means that the /usr/lib/pello/pello.py file has no execute permissions. The rpmlint tool expects the file to be executable because the file contains the shebang ( #! ). For the purpose of this example, you can leave this file without execute permissions and ignore this error. Apart from the output warnings and errors, the RPM passed rpmlint checks. 4.7.3. Checking a sample C program for common errors In the following sections, investigate possible warnings and errors that can occur when validating RPM content using the example of the cello spec file and cello binary RPM. 4.7.3.1. Checking the cello spec file for common errors Inspect the outputs of the following examples to learn how to check a cello spec file for common errors. Output of running the rpmlint command on the cello spec file For cello.spec , there is only one invalid-url Source0 warning. This warning means that the URL listed in the Source0 directive is unreachable. This is expected because the specified example.com URL does not exist. Assuming that this URL will be valid in the future, you can ignore this warning. Output of running the rpmlint command on the cello SRPM For the cello SRPM, there is a new invalid-url URL warning. This warning means that the URL specified in the URL directive is unreachable. Assuming that this URL will be valid in the future, you can ignore this warning. 4.7.3.2. Checking the cello binary RPM for common errors When checking binary RPMs, the rpmlint command checks the following items: Documentation Manual pages Consistent use of the Filesystem Hierarchy Standard Inspect the outputs of the following example to learn how to check a cello binary RPM for common errors. Output of running the rpmlint command on the cello binary RPM The no-documentation and no-manual-page-for-binary warnings mean that the RPM has no documentation or manual pages because you did not provide any. Apart from the output warnings, the RPM passed rpmlint checks. 4.8.
Logging RPM activity to syslog You can log any RPM activity or transaction by using the System Logging protocol ( syslog ). Prerequisites The syslog plug-in is installed on the system: Note The default location for the syslog messages is the /var/log/messages file. However, you can configure syslog to use another location to store the messages. Procedure Open the file that you configured to store the syslog messages. Alternatively, if you use the default syslog configuration, open the /var/log/messages file. Search for new lines including the [RPM] string. 4.9. Extracting RPM content In some cases, for example, if a package required by RPM is damaged, you might need to extract the content of the package. In such cases, if an RPM installation is still working despite the damage, you can use the rpm2archive utility to convert an .rpm file to a tar archive to use the content of the package. Note If the RPM installation is severely damaged, you can use the rpm2cpio utility to convert the RPM package file to a cpio archive. Procedure Convert the RPM file to the tar archive: The resulting file has the .tgz suffix. For example, to create an archive from the bash package, enter:
|
[
"dnf install rpmdevtools",
"rpmdev-setuptree tree ~/rpmbuild/ /home/user/rpmbuild/ |-- BUILD |-- RPMS |-- SOURCES |-- SPECS `-- SRPMS 5 directories, 0 files",
"rpm -q bash bash-4.4.19-7.el8.x86_64",
"rpm --eval %{MACRO}",
"rpm --eval %{_bindir} /usr/bin rpm --eval %{_libexecdir} /usr/libexec",
"cd ~/rpmbuild/SPECS",
"rpmdev-newspec bello bello.spec created; type minimal, rpm version >= 4.11. rpmdev-newspec cello cello.spec created; type minimal, rpm version >= 4.11. rpmdev-newspec pello pello.spec created; type minimal, rpm version >= 4.11.",
"Name: bello Version: 0.1 Release: 1%{?dist} Summary: Hello World example implemented in bash script License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz Requires: bash BuildArch: noarch %description The long-tail description for our Hello World Example implemented in bash script. %prep %setup -q %build %install mkdir -p %{buildroot}/%{_bindir} install -m 0755 %{name} %{buildroot}/%{_bindir}/%{name} %files %license LICENSE %{_bindir}/%{name} %changelog * Tue May 31 2016 Adam Miller <[email protected]> - 0.1-1 - First bello package - Example second item in the changelog for version-release 0.1-1",
"%global python3_pkgversion 3.11 1 Name: python-pello 2 Version: 1.0.2 Release: 1%{?dist} Summary: Example Python library License: MIT URL: https://github.com/fedora-python/Pello Source: %{url}/archive/v%{version}/Pello-%{version}.tar.gz BuildArch: noarch BuildRequires: python%{python3_pkgversion}-devel 3 Build dependencies needed to be specified manually BuildRequires: python%{python3_pkgversion}-setuptools Test dependencies needed to be specified manually Also runtime dependencies need to be BuildRequired manually to run tests during build BuildRequires: python%{python3_pkgversion}-pytest >= 3 %global _description %{expand: Pello is an example package with an executable that prints Hello World! on the command line.} %description %_description %package -n python%{python3_pkgversion}-pello 4 Summary: %{summary} %description -n python%{python3_pkgversion}-pello %_description %prep %autosetup -p1 -n Pello-%{version} %build The macro only supported projects with setup.py %py3_build 5 %install The macro only supported projects with setup.py %py3_install %check 6 %{pytest} Note that there is no %%files section for the unversioned python module %files -n python%{python3_pkgversion}-pello %doc README.md %license LICENSE.txt %{_bindir}/pello_greeting The library files needed to be listed manually %{python3_sitelib}/pello/ The metadata files needed to be listed manually %{python3_sitelib}/Pello-*.egg-info/",
"Name: cello Version: 1.0 Release: 1%{?dist} Summary: Hello World example implemented in C License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz Patch0: cello-output-first-patch.patch BuildRequires: gcc BuildRequires: make %description The long-tail description for our Hello World Example implemented in C. %prep %setup -q %patch0 %build make %{?_smp_mflags} %install %make_install %files %license LICENSE %{_bindir}/%{name} %changelog * Tue May 31 2016 Adam Miller <[email protected]> - 1.0-1 - First cello package",
"dnf install rpm-build",
"cd ~/rpmbuild/SPECS/",
"rpmbuild -bs <specfile>",
"rpmbuild -bs bello.spec Wrote: /home/admiller/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm rpmbuild -bs pello.spec Wrote: /home/admiller/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm rpmbuild -bs cello.spec Wrote: /home/admiller/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm",
"dnf install rpm-build",
"cd ~/rpmbuild/SRPMS/",
"rpmbuild --rebuild <srpm>",
"rpmbuild --rebuild bello-0.1-1.el8.src.rpm [output truncated] rpmbuild --rebuild pello-0.1.2-1.el8.src.rpm [output truncated] rpmbuild --rebuild cello-1.0-1.el8.src.rpm [output truncated]",
"rpm -Uvh ~/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm Updating / installing... 1:bello-0.1-1.el8 [100%] rpm -Uvh ~/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm Updating / installing... ... 1:pello-0.1.2-1.el8 [100%] rpm -Uvh ~/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm Updating / installing... ... 1:cello-1.0-1.el8 [100%]",
"dnf install rpm-build",
"cd ~/rpmbuild/SPECS/",
"rpmbuild -bb <spec_file>",
"rpmbuild -bb bello.spec rpmbuild -bb pello.spec rpmbuild -bb cello.spec",
"rpmlint bello.spec bello.spec: W: invalid-url Source0 : https://www.example.com/bello/releases/bello-0.1.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 0 errors, 1 warnings.",
"rpmlint ~/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm bello.src: W: invalid-url URL : https://www.example.com/bello HTTP Error 404: Not Found bello.src: W: invalid-url Source0: https://www.example.com/bello/releases/bello-0.1.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 0 errors, 2 warnings.",
"rpmlint ~/rpmbuild/RPMS/noarch/bello-0.1-1.el8.noarch.rpm bello.noarch: W: invalid-url URL: https://www.example.com/bello HTTP Error 404: Not Found bello.noarch: W: no-documentation bello.noarch: W: no-manual-page-for-binary bello 1 packages and 0 specfiles checked; 0 errors, 3 warnings.",
"rpmlint pello.spec pello.spec:30: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name} pello.spec:34: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.pyc pello.spec:39: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name}/ pello.spec:43: E: hardcoded-library-path in /usr/lib/%{name}/ pello.spec:45: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.py* pello.spec: W: invalid-url Source0 : https://www.example.com/pello/releases/pello-0.1.2.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 5 errors, 1 warnings.",
"rpmlint ~/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm pello.src: W: invalid-url URL : https://www.example.com/pello HTTP Error 404: Not Found pello.src:30: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name} pello.src:34: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.pyc pello.src:39: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name}/ pello.src:43: E: hardcoded-library-path in /usr/lib/%{name}/ pello.src:45: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.py* pello.src: W: invalid-url Source0: https://www.example.com/pello/releases/pello-0.1.2.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 5 errors, 2 warnings.",
"rpmlint ~/rpmbuild/RPMS/noarch/pello-0.1.2-1.el8.noarch.rpm pello.noarch: W: invalid-url URL: https://www.example.com/pello HTTP Error 404: Not Found pello.noarch: W: only-non-binary-in-usr-lib pello.noarch: W: no-documentation pello.noarch: E: non-executable-script /usr/lib/pello/pello.py 0644L /usr/bin/env pello.noarch: W: no-manual-page-for-binary pello 1 packages and 0 specfiles checked; 1 errors, 4 warnings.",
"rpmlint ~/rpmbuild/SPECS/cello.spec /home/admiller/rpmbuild/SPECS/cello.spec: W: invalid-url Source0 : https://www.example.com/cello/releases/cello-1.0.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 0 errors, 1 warnings.",
"rpmlint ~/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm cello.src: W: invalid-url URL : https://www.example.com/cello HTTP Error 404: Not Found cello.src: W: invalid-url Source0: https://www.example.com/cello/releases/cello-1.0.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 0 errors, 2 warnings.",
"rpmlint ~/rpmbuild/RPMS/x86_64/cello-1.0-1.el8.x86_64.rpm cello.x86_64: W: invalid-url URL: https://www.example.com/cello HTTP Error 404: Not Found cello.x86_64: W: no-documentation cello.x86_64: W: no-manual-page-for-binary cello 1 packages and 0 specfiles checked; 0 errors, 3 warnings.",
"dnf install rpm-plugin-syslog",
"rpm2archive <filename>.rpm",
"rpm2archive bash-4.4.19-6.el8.x86_64.rpm ls bash-4.4.19-6.el8.x86_64.rpm.tgz bash-4.4.19-6.el8.x86_64.rpm.tgz"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/packaging_and_distributing_software/packaging-software_packaging-and-distributing-software
|
5.9.7.3. Managing Disk Quotas
|
5.9.7.3. Managing Disk Quotas There is little actual management required to support disk quotas under Red Hat Enterprise Linux. Essentially, all that is required is: Generating disk usage reports at regular intervals (and following up with users that seem to be having trouble effectively managing their allocated disk space) Making sure that the disk quotas remain accurate Creating a disk usage report entails running the repquota utility program. Using the command repquota /home produces this output: More information about repquota can be found in the System Administrators Guide , in the chapter on disk quotas. Whenever a file system is not unmounted cleanly (due to a system crash, for example), it is necessary to run quotacheck . However, many system administrators recommend running quotacheck on a regular basis, even if the system has not crashed. The process is similar to the initial use of quotacheck when enabling disk quotas. Here is an example quotacheck command: The easiest way to run quotacheck on a regular basis is to use cron . Most system administrators run quotacheck once a week, though there may be valid reasons to pick a longer or shorter interval, depending on your specific conditions.
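For example, a root crontab entry similar to the following would run quotacheck every Sunday at 03:00. This is only a sketch; the schedule and the options should be adapted to your environment:
0 3 * * 0 /sbin/quotacheck -avug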
|
[
"*** Report for user quotas on device /dev/md3 Block grace time: 7days; Inode grace time: 7days Block limits File limits User used soft hard grace used soft hard grace ---------------------------------------------------------------------- root -- 32836 0 0 4 0 0 matt -- 6618000 6900000 7000000 17397 0 0",
"quotacheck -avug"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-quotas-managing
|
20.28. Retrieving Guest Virtual Machine Information
|
20.28. Retrieving Guest Virtual Machine Information 20.28.1. Getting the Domain ID of a Guest Virtual Machine The virsh domid command returns the guest virtual machine's ID. Note that this changes each time the guest starts or restarts. This command requires either the name of the virtual machine or the virtual machine's UUID. Example 20.67. How to retrieve the domain ID for a guest virtual machine The following example retrieves the domain ID of a guest virtual machine named guest1 : Note that domid returns - for guest virtual machines that are in the shut off state. To confirm that the virtual machine is shut off, you can run the virsh list --all command. 20.28.2. Getting the Domain Name of a Guest Virtual Machine The virsh domname command returns the name of the guest virtual machine given its ID or UUID. Note that the ID changes each time the guest starts. Example 20.68. How to retrieve a virtual machine's name The following example retrieves the name of the guest virtual machine whose ID is 8 : 20.28.3. Getting the UUID of a Guest Virtual Machine The virsh domuuid command returns the universally unique identifier (UUID) for a given guest virtual machine name or ID. Example 20.69. How to display the UUID for a guest virtual machine The following example retrieves the UUID for the guest virtual machine named guest1 : 20.28.4. Displaying Guest Virtual Machine Information The virsh dominfo command displays information about a guest virtual machine given its name, ID, or UUID. Note that the ID changes each time the virtual machine starts. Example 20.70. How to display guest virtual machine general details The following example displays the general details about the guest virtual machine named guest1 :
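In addition to the individual examples above, you can collect these details for every guest defined on a host with a short shell loop. This is a simple sketch that assumes the guests are defined on the local hypervisor:
for vm in $(virsh list --all --name); do virsh dominfo "$vm"; done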
|
[
"virsh domid guest1 8",
"virsh domname 8 guest1",
"virsh domuuid guest1 r5b2-mySQL01 4a4c59a7-ee3f-c781-96e4-288f2862f011",
"virsh dominfo guest1 Id: 8 Name: guest1 UUID: 90e0d63e-d5c1-4735-91f6-20a32ca22c48 OS Type: hvm State: running CPU(s): 1 CPU time: 32.6s Max memory: 1048576 KiB Used memory: 1048576 KiB Persistent: yes Autostart: disable Managed save: no Security model: selinux Security DOI: 0 Security label: system_u:system_r:svirt_t:s0:c552,c818 (enforcing)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-retrieving_guest_virtual_machine_information
|
5.196. mod_auth_kerb
|
5.196. mod_auth_kerb 5.196.1. RHBA-2012:0877 - mod_auth_kerb bug fix and enhancement update Updated mod_auth_kerb packages that fix a bug and add an enhancement are now available for Red Hat Enterprise Linux 6. The mod_auth_kerb package provides a module for the Apache HTTP Server designed to provide Kerberos authentication over HTTP. The module supports the Negotiate authentication method, which performs full Kerberos authentication based on ticket exchanges. Bug Fix BZ# 688210 Due to a bug in the handling of memory lifetime when the module was configured to allow delegated credentials, the $KRB5CCNAME variable was lost after the first request of an authenticated connection, causing web applications which relied on the presence of delegated credentials to fail. The memory lifetime handling has been fixed, allowing such web applications to access delegated credentials. Enhancement BZ# 767741 Support for "S4U2Proxy" constrained delegation has been added, which allows mod_auth_kerb to obtain credentials on behalf of an authenticated user. Users are advised to upgrade to these updated mod_auth_kerb packages, which fix this bug and add this enhancement.
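For reference, a typical httpd configuration block that enables Negotiate authentication and credential delegation with mod_auth_kerb might look like the following. The location, keytab path, and realm are placeholders and must be adjusted to your environment:
<Location /protected>
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate On
    KrbSaveCredentials On
    Krb5KeyTab /etc/httpd/conf/httpd.keytab
    KrbAuthRealms EXAMPLE.COM
    Require valid-user
</Location>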
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/mod_auth_kerb
|
Securing Applications and Services Guide
|
Securing Applications and Services Guide Red Hat build of Keycloak 22.0 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/securing_applications_and_services_guide/index
|
Chapter 58. Networking
|
Chapter 58. Networking Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 It is impossible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5 signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important: Note that MD5 certificates are highly insecure and Red Hat does not recommend using them. (BZ#1062656)
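A minimal sketch of the workaround, run as root (only the added line is shown; the rest of the unit file is left unchanged):
cp /usr/lib/systemd/system/wpa_supplicant.service /etc/systemd/system/wpa_supplicant.service
# edit /etc/systemd/system/wpa_supplicant.service and add to its [Service] section:
#     Environment=OPENSSL_ENABLE_MD5_VERIFY=1
systemctl daemon-reload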
|
[
"Environment=OPENSSL_ENABLE_MD5_VERIFY=1"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/known_issues_networking
|
Registry
|
Registry OpenShift Container Platform 4.17 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>",
"topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule",
"topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local",
"oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: gcs: bucket: <bucket-name> projectID: <project-id> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"disableRedirect\":true}}'",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_SWIFT_USERNAME=<username> --from-literal=REGISTRY_STORAGE_SWIFT_PASSWORD=<password> -n openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: swift: container: <container-id>",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: azure: accountName: <storage-account-name> container: <container-name>",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc policy add-role-to-user registry-viewer <user_name>",
"oc policy add-role-to-user registry-editor <user_name>",
"oc get nodes",
"oc debug nodes/<node_name>",
"sh-4.2# chroot /host",
"sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443",
"sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000",
"Login Succeeded!",
"sh-4.2# podman pull <name.io>/<image>",
"sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"oc get pods -n openshift-image-registry",
"NAME READY STATUS RESTARTS AGE image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m",
"oc logs deployments/image-registry -n openshift-image-registry",
"2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF",
"oc adm policy add-cluster-role-to-user prometheus-scraper <username>",
"openshift: oc whoami -t",
"curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20",
"HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"sudo mv tls.crt /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust enable",
"sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1",
"oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/registry/index
|
6.3. Performing a Driver Update During Installation
|
6.3. Performing a Driver Update During Installation You can perform a driver update during installation in the following ways: Let the installer automatically find a driver update disk. Let the installer prompt you for a driver update. Use a boot option to specify a driver update disk. 6.3.1. Let the Installer Find a Driver Update Disk Automatically Attach a block device with the filesystem label OEMDRV before starting the installation process. The installer will automatically examine the device and load any driver updates that it detects and will not prompt you during the process. Refer to Section 6.2.1.1, "Preparing to use an image file on local storage" to prepare a storage device for the installer to find.
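For example, to prepare a spare partition so that the installer finds it automatically, you can create a file system with the OEMDRV label and copy the driver update contents onto it. The device name and source path below are placeholders:
mkfs.ext4 -L OEMDRV /dev/sdX1
mount /dev/sdX1 /mnt
cp -r /path/to/driver_update/* /mnt/
umount /mnt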
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-Performing_a_driver_update_during_installation-x86
|
16.9.2. Installation
|
16.9.2. Installation To install virt-inspector and the documentation, enter the following command: To process Windows guest virtual machines you must also install libguestfs-winsupport . Refer to Section 16.10.2, "Installation" for details. The documentation, including example XML output and a Relax-NG schema for the output, will be installed in /usr/share/doc/libguestfs-devel-*/ where "*" is replaced by the version number of libguestfs.
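After installation, you can point virt-inspector at a guest disk image to produce an XML report of the operating system it contains. The image path below is a placeholder, and the exact options may vary between libguestfs versions:
virt-inspector -a /path/to/guest-disk.img > report.xml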
|
[
"yum install libguestfs-tools libguestfs-devel"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virt-inspector-install
|
Chapter 1. Introduction to SELinux
|
Chapter 1. Introduction to SELinux SELinux provides enhanced security by enforcing security policies, using labels for files, processes, and ports, and logging unauthorized access attempts. SELinux is enabled and set to enforcing mode on RHEL 9 by default, and security policies for system processes are maintained by Red Hat. For more information, refer to Changing SELinux states and modes on RHEL. You can refer to SAP Note 3108302 - SAP HANA DB: Recommended OS Settings for RHEL 9 to learn which HANA versions SAP has tested with SELinux set to enforcing and unconfined mode. Red Hat recommends that you use SELinux in enforcing mode on your RHEL systems running SAP HANA. This document describes the necessary configuration changes that you must make. If you come across SELinux-related issues while testing or running your SAP HANA system, SAP reserves the right to disable SELinux. However, most of the problems can be solved by changing the SELinux mode from enforcing to permissive . The advantage is that your system keeps operating while you analyze and solve the problem.
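As a sketch of this troubleshooting approach, you can switch the running system to permissive mode, reproduce the issue while denials are only logged, and then return to enforcing mode:
getenforce
setenforce 0    # permissive mode: SELinux logs denials but does not block access
ausearch -m avc -ts recent    # review the logged denials while you reproduce the problem
setenforce 1    # return to enforcing mode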
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/using_selinux_for_sap_hana/asmb_intro_using-selinux
|
12.2.4. Deleting a Storage Pool Using virsh
|
12.2.4. Deleting a Storage Pool Using virsh To avoid any issues with other guest virtual machines using the same pool, it is best to stop the storage pool and release any resources in use by it. Optionally, if you want to remove the directory where the storage pool resides, use the following command: Finally, remove the storage pool's definition with the virsh pool-undefine command.
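After these steps, you can confirm that the pool is no longer listed (the pool name follows the example above):
virsh pool-list --all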
|
[
"virsh pool-destroy guest_images_disk",
"virsh pool-delete guest_images_disk",
"virsh pool-undefine guest_images_disk"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/delete-ded-part-storage-pool-virsh
|
19.3. Sub-Collections
|
19.3. Sub-Collections 19.3.1. Domain Users Sub-Collection The users sub-collection contains all users in the directory service. This information is used to add new users to the Red Hat Virtualization environment. Table 19.2. Domain user elements Element Type Description name string The name of the user. last_name string The surname of the user. user_name string The user name from directory service. domain id GUID The containing directory service domain. groups complex A list of directory service groups for this user. Example 19.2. An XML representation of a user in the users sub-collection 19.3.2. Domain Groups Sub-Collection The groups sub-collection contains all groups in the directory service. A domain group resource contains a set of elements. Table 19.3. Domain group elements Element Type Description name string The name of the group. domain id GUID The containing directory service domain. Example 19.3. An XML representation of a group in the groups sub-collection
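As an illustration only, a user listing such as the one in these examples could be retrieved from the version 3 REST API with a request similar to the following. The host name, credentials, and CA certificate are placeholders and must match your environment; the domain ID is taken from the example output:
curl -X GET -H "Accept: application/xml" -u admin@internal:password --cacert ca.crt "https://rhvm.example.com/ovirt-engine/api/domains/77696e32-6b38-7268-6576-2e656e676c61/users"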
|
[
"<user id=\"225f15cd-e891-434d-8262-a66808fcb9b1\" href=\"/ovirt-engine/api/domains/77696e32-6b38-7268-6576-2e656e676c61/users/ d3b4e7be-5f57-4dac-b937-21e1771a501f\"> <name>RHEV-M Admin</name> <user_name>[email protected]</user_name> <domain id=\"77696e32-6b38-7268-6576-2e656e676c61\" href=\"/ovirt-engine/api/domains/77696e32-6b38-7268-6576-2e656e676c61\"/> <groups> <group> <name>domain.example.com/Users/Enterprise Admins</name> </group> <group> <name>domain.example.com/Users/Domain Admins</name> </group> </groups> </user>",
"<group id=\"85bf8d97-273c-4a5c-b801-b17d58330dab\" href=\"/ovirt-engine/api/domains/77696e32-6b38-7268-6576-2e656e676c61/groups/ 85bf8d97-273c-4a5c-b801-b17d58330dab\"> <name>example.com/Users/Enterprise Admins</name> <domain id=\"77696e32-6b38-7268-6576-2e656e676c61\" href=\"/ovirt-engine/api/domains/77696e32-6b38-7268-6576-2e656e676c61\"/> </group>"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-sub-collections6
|
Chapter 4. Specifics of Individual Software Collections
|
Chapter 4. Specifics of Individual Software Collections This chapter is focused on the specifics of certain Software Collections and provides additional details concerning these components. 4.1. Red Hat Developer Toolset Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. Red Hat Developer Toolset provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. Similarly to other Software Collections, an additional set of tools is installed into the /opt/ directory. These tools are enabled by the user on demand using the supplied scl utility. Similarly to other Software Collections, these do not replace the Red Hat Enterprise Linux system versions of these tools, nor will they be used in preference to those system versions unless explicitly invoked using the scl utility. For an overview of features, refer to the Features section of the Red Hat Developer Toolset Release Notes . For detailed information regarding usage and changes in 12.1, see the Red Hat Developer Toolset User Guide . 4.2. Maven The rh-maven36 Software Collection, available only for Red Hat Enterprise Linux 7, provides a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information. To install the rh-maven36 Collection, type the following command as root : yum install rh-maven36 To enable this collection, type the following command at a shell prompt: scl enable rh-maven36 bash Global Maven settings, such as remote repositories or mirrors, can be customized by editing the /opt/rh/rh-maven36/root/etc/maven/settings.xml file. For more information about using Maven, refer to the Maven documentation . Usage of plug-ins is described in this section ; to find documentation regarding individual plug-ins, see the index of plug-ins . 4.3. Database Connectors Database connector packages provide the database client functionality, which is necessary for local or remote connection to a database server. Table 4.1, "Interoperability Between Languages and Databases" lists Software Collections with language runtimes that include connectors for certain database servers: yes - the combination is supported no - the combination is not supported Table 4.1. Interoperability Between Languages and Databases Database Language (Software Collection) MariaDB MongoDB MySQL PostgreSQL Redis SQLite3 rh-nodejs4 no no no no no no rh-nodejs6 no no no no no no rh-nodejs8 no no no no no no rh-nodejs10 no no no no no no rh-nodejs12 no no no no no no rh-nodejs14 no no no no no no rh-perl520 yes no yes yes no no rh-perl524 yes no yes yes no no rh-perl526 yes no yes yes no no rh-perl530 yes no yes yes no yes rh-php56 yes yes yes yes no yes rh-php70 yes no yes yes no yes rh-php71 yes no yes yes no yes rh-php72 yes no yes yes no yes rh-php73 yes no yes yes no yes python27 yes yes yes yes no yes rh-python34 no yes no yes no yes rh-python35 yes yes yes yes no yes rh-python36 yes yes yes yes no yes rh-python38 yes no yes yes no yes rh-ror41 yes yes yes yes no yes rh-ror42 yes yes yes yes no yes rh-ror50 yes yes yes yes no yes rh-ruby25 yes yes yes yes no no rh-ruby26 yes yes yes yes no no rh-ruby27 yes yes yes yes no no rh-ruby30 yes no yes yes no yes
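As a usage illustration for the Maven Collection described above, you can run a single Maven command inside the Collection's environment without opening a new shell:
scl enable rh-maven36 'mvn --version'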
| null |
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.8_release_notes/chap-individual_collections
|