8.106. logwatch
8.106.1. RHBA-2013:1247 - logwatch bug fix update

An updated logwatch package that fixes several bugs is now available for Red Hat Enterprise Linux 6. Logwatch is a customizable, pluggable log-monitoring system. It goes through the user's logs for a given period of time and produces a report in the areas that the user needs.

Bug Fixes

BZ#737247 - Previously, logwatch did not correctly parse the up2date service's "updateLoginInfo() login info" messages and displayed them as unmatched entries. With this update, parsing of such log messages has been fixed and works as expected.

BZ#799690 - Prior to this update, logwatch did not correctly parse many Openswan log messages and displayed them as unmatched entries. With this update, parsing of such log messages has been fixed and works as expected.

BZ#799987 - Logwatch did not parse Dovecot 2.x log messages properly, which resulted in many unmatched entries in its reports. This patch adds additional logic to correctly parse Dovecot 2.x logs, so unmatched entries related to Dovecot 2.x messages no longer appear.

BZ#800843 - The .hdr files are headers for RPM packages; they are essentially metadata. Logwatch's HTTP service parser emitted warnings for the .hdr files, even when the "Detail" parameter was set to "Low". With this update, the .hdr files are now parsed as archives, which removes spurious warnings about the .hdr files.

BZ#837034 - Previously, logwatch did not correctly handle an empty "MailTo" option in its configuration, which resulted in no output even though a report should have been displayed. This patch adds additional logic to correctly handle an empty "MailTo" option. As a result, output is correctly produced even when this option is empty.

BZ#888007 - Prior to this update, logwatch did not correctly parse many smartd log messages and displayed them as unmatched entries. With this update, parsing of such log messages has been fixed and works as expected.

BZ#894134 - Prior to this update, logwatch did not correctly parse DNS log messages with DNSSEC validation enabled and displayed them as unmatched entries. With this update, parsing of such log messages has been fixed and works as expected.

BZ#894185 - Previously, logwatch did not correctly parse the postfix service's "improper command pipelining" messages and displayed them as unmatched entries. With this update, parsing of such log messages has been fixed and works as expected.

BZ#894191 - Previously, logwatch did not correctly parse user names in the secure log. It improperly assumed that such names are composed of letters only and displayed messages containing names with other symbols, such as digits, as unmatched entries. With this update, parsing of user names has been enhanced to include underscores and digits, so log messages containing such user names no longer display as unmatched entries.

BZ#974042 - Logins initiated with the "su -" or "su -l" command were not correctly parsed by logwatch and were displayed as unmatched entries. This update fixes this bug.

BZ#974044 - Prior to this update, logwatch did not correctly parse the RSYSLOG_FileFormat time stamps and displayed them as unmatched entries. With this update, parsing of the rsyslog time stamps has been fixed and works as expected.

BZ#974046 - SSH Kerberos (GSS) logins were not correctly parsed by logwatch and were displayed as unmatched entries. This update fixes this bug.

BZ#974047 - Xen virtual console logins were not correctly parsed by logwatch and were displayed as unmatched entries. This update fixes this bug.
Users of logwatch are advised to upgrade to this updated package, which fixes these bugs.
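As a quick check after upgrading, you can print a one-off report for one of the affected services and look for leftover unmatched entries. This is a minimal sketch that assumes the default logwatch configuration shipped with RHEL 6; the options used (--service, --range, --detail, --print) are standard logwatch switches:

# logwatch --service secure --range yesterday --detail Low --print

The --print option writes the report to standard output instead of mailing it, which makes it easy to inspect the "Unmatched Entries" section for the secure log parser mentioned in BZ#894191.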
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/logwatch
Chapter 11. Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates
In OpenShift Container Platform version 4.12, you can install a cluster on Google Cloud Platform (GCP) that uses infrastructure that you provide. The steps for performing a user-provisioned infrastructure installation are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

11.1. Prerequisites

You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials. Note Be sure to also review this site list if you are configuring a proxy.

11.2. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them; a minimal oc CLI sketch for approving pending CSRs is shown after the internet access requirements below.

11.3. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
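As referenced in the certificate signing requests section above, the following is a minimal, non-authoritative sketch of approving pending CSRs after installation. It assumes the oc client is installed and that KUBECONFIG points at the new cluster's admin kubeconfig. List the current CSRs and approve an individual request:

$ oc get csr
$ oc adm certificate approve <csr_name>

To approve every CSR that has not yet been handled in one pass:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Repeat the check until no Pending requests remain; additional serving-certificate CSRs can appear as nodes join the cluster.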
11.4. Configuring your GCP project

Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it.

11.4.1. Creating a GCP project

To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing.

11.4.2. Enabling API services in GCP

Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation.

Table 11.1. Required API services (API service - Console service name)
Compute Engine API - compute.googleapis.com
Cloud Resource Manager API - cloudresourcemanager.googleapis.com
Google DNS API - dns.googleapis.com
IAM Service Account Credentials API - iamcredentials.googleapis.com
Identity and Access Management (IAM) API - iam.googleapis.com
Service Usage API - serviceusage.googleapis.com

Table 11.2. Optional API services (API service - Console service name)
Cloud Deployment Manager V2 API - deploymentmanager.googleapis.com
Google Cloud APIs - cloudapis.googleapis.com
Service Management API - servicemanagement.googleapis.com
Google Cloud Storage JSON API - storage-api.googleapis.com
Cloud Storage - storage-component.googleapis.com

11.4.3. Configuring DNS for GCP

To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains. Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com. Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers. If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation.
If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 11.4.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 11.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 11.4.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. 
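Tying sections 11.4.2 and 11.4.5 together, the following is a minimal sketch of enabling the required APIs, creating an installer service account, granting it a role, and creating a key with the gcloud CLI. The project ID placeholder, the service account name (openshift-installer), and the key file name (installer-key.json) are illustrative choices, not part of the documented procedure, and the Owner role is used only for brevity; substitute the more restrictive roles from "Required GCP roles" if your security policies require it:

$ gcloud config set project <project_id>
$ gcloud services enable compute.googleapis.com cloudresourcemanager.googleapis.com dns.googleapis.com iamcredentials.googleapis.com iam.googleapis.com serviceusage.googleapis.com
$ gcloud iam service-accounts create openshift-installer --display-name="OpenShift installer"
$ gcloud projects add-iam-policy-binding <project_id> --member="serviceAccount:openshift-installer@<project_id>.iam.gserviceaccount.com" --role="roles/owner"
$ gcloud iam service-accounts keys create installer-key.json --iam-account=openshift-installer@<project_id>.iam.gserviceaccount.com

The resulting installer-key.json file is what you later provide to the installation program, or you can instead attach the service account to a GCP virtual machine as described above.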
You must have a service account key or a virtual machine with an attached service account to create the cluster. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 11.4.6. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create a service account with the following permissions. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin IAM Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using passthrough credentials mode Compute Load Balancer Admin IAM Role Viewer Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The roles are applied to the service accounts that the control plane and compute machines use: Table 11.4. GCP service account permissions Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 11.4.7. Required GCP permissions for user-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the user-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. For more information, see "Required roles for using passthrough credentials mode" in the "Required GCP roles" section. Example 11.1. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 11.2. 
Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 11.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list dns.resourceRecordSets.update Example 11.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 11.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 11.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 11.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 11.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 11.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 11.10. Required IAM permissions for installation iam.roles.get Example 11.11. Required Images permissions for installation compute.images.create compute.images.delete compute.images.get compute.images.list Example 11.12. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 11.13. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 11.14. 
Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 11.15. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 11.16. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 11.17. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 11.18. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 11.19. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 11.20. Required Images permissions for deletion compute.images.delete compute.images.list Example 11.21. Required permissions to get Region related information compute.regions.get Example 11.22. Required Deployment Manager permissions deploymentmanager.deployments.create deploymentmanager.deployments.delete deploymentmanager.deployments.get deploymentmanager.deployments.list deploymentmanager.manifests.get deploymentmanager.operations.get deploymentmanager.resources.list Additional resources Optimizing storage 11.4.8. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 11.4.9. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. 
Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in $PATH: gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation.

11.5. Requirements for a cluster with user-provisioned infrastructure

For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure.

11.5.1. Required machines for cluster installation

The smallest OpenShift Container Platform clusters require the following hosts:

Table 11.5. Minimum required hosts (Hosts - Description)
One temporary bootstrap machine - The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.
Three control plane machines - The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane.
At least two compute machines, which are also known as worker machines - The workloads requested by OpenShift Container Platform users run on the compute machines.

Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits.

11.5.2. Minimum resource requirements for cluster installation

Each cluster machine must meet the following minimum requirements:

Table 11.6. Minimum resource requirements (Machine - Operating System - vCPU [1] - Virtual RAM - Storage - Input/Output Per Second (IOPS) [2])
Bootstrap - RHCOS - 4 - 16 GB - 100 GB - 300
Control plane - RHCOS - 4 - 16 GB - 100 GB - 300
Compute - RHCOS, RHEL 8.6 and later [3] - 2 - 8 GB - 100 GB - 300

[1] One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. For example, a machine with 2 threads per core, 8 cores, and 1 socket provides 16 vCPUs. [2] OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. [3] As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.

If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
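To see which zones and machine types are actually available to you before choosing instance sizes, you can query them with the gcloud CLI once it is authenticated (section 11.4.9). This is only a sketch: us-central1 and us-central1-a are example locations, installer-key.json is the hypothetical key file from the earlier service account sketch, and the name filters are illustrative:

$ gcloud auth activate-service-account --key-file=installer-key.json
$ gcloud compute zones list --filter="name~'^us-central1'"
$ gcloud compute machine-types list --zones=us-central1-a --filter="name~'^n2-standard'"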
Additional resources Optimizing storage

11.5.3. Tested instance types for GCP

The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 11.23. Machine series A2 A3 C2 C2D C3 C3D C4 E2 M1 N1 N2 N2D N4 Tau T2D

11.5.4. Using custom machine types

Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 .

11.6. Creating the installation files for GCP

To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

11.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: $ mkdir $HOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: $ openshift-install create manifests --dir $HOME/clusterconfig Example output ? SSH Public Key ...
INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: $ ls $HOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file $HOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:

variant: openshift
version: 4.12.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
storage:
  disks:
  - device: /dev/disk/by-id/<device_id> 1
    partitions:
    - label: var
      start_mib: <partition_start_offset> 2
      size_mib: <partition_size> 3
      number: 5
  filesystems:
  - device: /dev/disk/by-partlabel/var
    path: /var
    format: xfs
    mount_options: [defaults, prjquota] 4
    with_mount_unit: true

1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: $ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: $ openshift-install create ignition-configs --dir $HOME/clusterconfig $ ls $HOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

11.6.2. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission.
This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Optional: If you do not want the cluster to provision compute machines, empty the compute pool by editing the resulting install-config.yaml file to set replicas to 0 for the compute pool: compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1 1 Set to 0 . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 11.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created.

11.6.4. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates.
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: $ ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Optional: If you do not want the cluster to provision compute machines, remove the Kubernetes manifest files that define the worker machines: $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone: 1
    id: mycluster-100419-private-zone
  publicZone: 2
    id: example.openshift.com
status: {}

1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: $ ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Optional: Adding the ingress DNS records

11.7. Exporting common variables

11.7.1. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation.
The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: $ jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string.

11.7.2. Exporting common variables for Deployment Manager templates

You must export a common set of variables that are used with the provided Deployment Manager templates used to assist in completing a user-provisioned infrastructure installation on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Generate the Ignition config files for your cluster. Install the jq package. Procedure Export the following common variables to be used by the provided Deployment Manager templates:

$ export BASE_DOMAIN='<base_domain>'
$ export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>'
$ export NETWORK_CIDR='10.0.0.0/16'
$ export MASTER_SUBNET_CIDR='10.0.0.0/17'
$ export WORKER_SUBNET_CIDR='10.0.128.0/17'
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
$ export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json`
$ export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json`
$ export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`
$ export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`

1 For <installation_directory> , specify the path to the directory that you stored the installation files in.

11.8. Creating a VPC in GCP

You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Create a 01_vpc.yaml resource definition file:

$ cat <<EOF >01_vpc.yaml
imports:
- path: 01_vpc.py
resources:
- name: cluster-vpc
  type: 01_vpc.py
  properties:
    infra_id: '${INFRA_ID}' 1
    region: '${REGION}' 2
    master_subnet_cidr: '${MASTER_SUBNET_CIDR}' 3
    worker_subnet_cidr: '${WORKER_SUBNET_CIDR}' 4
EOF

1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 .
4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: $ gcloud deployment-manager deployments create ${INFRA_ID}-vpc --config 01_vpc.yaml

11.8.1. Deployment Manager template for the VPC

You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 11.24. 01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': '$(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': '$(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}

11.9. Networking requirements for user-provisioned infrastructure

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files.

11.9.1. Setting the cluster node hostnames through DHCP

On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation.

11.9.2. Network connectivity requirements

You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required.
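Before opening ports, it can be useful to confirm that the VPC deployment from section 11.8 produced the expected network and subnets. The following optional check is only a sketch; it assumes the resource naming used by the provided 01_vpc.py template and the variables exported in section 11.7.2:

$ gcloud deployment-manager deployments describe "${INFRA_ID}-vpc"
$ gcloud compute networks subnets list --filter="name~'${INFRA_ID}'"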
Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 11.7. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 11.8. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 11.9. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 11.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. 
Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 11.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 11.25. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' 
+ context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 11.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 11.26. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 11.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. 
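Note: The DNS records that you create later in this section point at the IP addresses that the load balancer deployment reserved. As an optional sanity check, shown here as a sketch that assumes the variable and resource names used earlier in this topic, you can confirm that the addresses exist and that the exported variables are populated before you continue:
USD gcloud compute addresses list --filter="name~USD{INFRA_ID}-cluster" --regions=USD{REGION}
USD echo "USD{CLUSTER_IP} USD{CLUSTER_PUBLIC_IP}"
For an internal cluster, CLUSTER_PUBLIC_IP is not set, so only CLUSTER_IP needs a value.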
Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 11.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 11.27. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 11.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the security groups that your cluster requires. 
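Note: If you want to confirm that the DNS transactions from the previous section were committed before you continue, you can list the record sets in the private zone. This optional check is a sketch that assumes the zone name used earlier in this topic:
USD gcloud dns record-sets list --zone USD{INFRA_ID}-private-zone
The output should include the api and api-int A records in addition to the zone's default SOA and NS records.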
Create a 03_firewall.yaml resource definition file: USD cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR} . 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml 11.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 11.28. 03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': 
context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 11.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." 
--format json | jq -r '.[0].email'`) Export the variable for the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 11.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 11.29. 03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 11.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . 
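Note: Optionally, before you upload the image in the following steps, you can confirm that the downloaded archive is intact. This generic check is a sketch and is not part of the documented procedure; replace the path and version with the file that you downloaded:
USD tar tzf <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz >/dev/null && echo "archive OK"
A non-zero exit status indicates a corrupt or incomplete download.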
Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 11.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Ensure pyOpenSSL is installed. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. 
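Note: Before you create the deployment, you can optionally verify that the signed URL serves the Ignition config, because the bootstrap machine cannot complete its first boot if it cannot fetch this file. The following check is a sketch and assumes that curl is available on your workstation:
USD curl -s -o /dev/null -w "%{http_code}\n" "USD{BOOTSTRAP_IGN}"
A 200 response code indicates that the bootstrap Ignition config is reachable through the signed URL.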
Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually. Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances \ USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend \ USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 11.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 11.30. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 11.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. 
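Note: Optionally, before you provision the control plane machines, you can confirm that the bootstrap instance was registered with the internal load balancer. The following commands are a sketch that assumes the resource names created earlier in this topic:
USD gcloud compute instance-groups list-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0}
USD gcloud compute backend-services describe USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --format json | jq -r '.backends[].group'
The first command should list the bootstrap instance, and the second should include the USD{INFRA_ID}-bootstrap-ig instance group among the backends.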
Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 11.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 11.31. 
05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 11.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. 
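Note: Before you wait for the bootstrap process to complete, you can optionally confirm that all three control plane instances are running and registered with their instance groups. The following commands are a sketch that assumes the names used by the Deployment Manager templates in this topic; repeat the second command for USD{ZONE_1} and USD{ZONE_2} to check the remaining instance groups:
USD gcloud compute instances list --filter="name~USD{INFRA_ID}-master"
USD gcloud compute instance-groups list-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0}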
Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 11.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." 
--format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145 11.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 11.32. 06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 11.19. 
Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 11.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 11.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. 
You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... 
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 11.22. Optional: Adding the ingress DNS records If you removed the DNS zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Configure a GCP account. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Create the worker machines. Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: USD export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add the A record to the private zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the A record to the public zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 11.23. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned GCP infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster. 
Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 11.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . 
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 11.25. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Configure Global Access for an Ingress Controller on GCP .
|
[
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`",
"cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}",
"export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)",
"export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)",
"export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)",
"export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)",
"cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml",
"export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)",
"export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}",
"def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}",
"cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}",
"cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}",
"cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml",
"export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"",
"gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}",
"gsutil mb gs://<bucket_name>",
"gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>",
"export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz",
"gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"",
"export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)",
"gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition",
"gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/",
"export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`",
"cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap",
"gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign`",
"cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign",
"gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition",
"gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign`",
"cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98",
"export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_gcp/installing-gcp-user-infra
|
Developing decision services in Red Hat Process Automation Manager
|
Developing decision services in Red Hat Process Automation Manager Red Hat Process Automation Manager 7.13
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/index
|
Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation
|
Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation To install Red Hat Virtualization Manager on a system that does not have a direct connection to the Content Delivery Network, download the required packages on a system that has Internet access, then create a repository that can be shared with the offline Manager machine. The system hosting the repository must be connected to the same network as the client systems where the packages are to be installed. Prerequisites A Red Hat Enterprise Linux 7 Server installed on a system that has access to the Content Delivery Network. This system downloads all the required packages, and distributes them to your offline system(s). A large amount of free disk space available. This procedure downloads a large number of packages, and requires up to 50GB of free disk space. Enable the Red Hat Virtualization Manager repositories on the online system: Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: Configuring the Offline Repository Servers that are not connected to the Internet can access software repositories on other systems using File Transfer Protocol (FTP). To create the FTP repository, install and configure vsftpd : Install the vsftpd package: Start the vsftpd service, and ensure the service starts on boot: Create a sub-directory inside the /var/ftp/pub/ directory. This is where the downloaded packages will be made available: Download packages from all configured software repositories to the rhvrepo directory. This includes repositories for all Content Delivery Network subscription pools attached to the system, and any locally configured repositories: This command downloads a large number of packages, and takes a long time to complete. The -l option enables yum plug-in support. Install the createrepo package: Create repository metadata for each of the sub-directories where packages were downloaded under /var/ftp/pub/rhvrepo : Create a repository file, and copy it to the /etc/yum.repos.d/ directory on the offline machine on which you will install the Manager. The configuration file can be created manually or with a script. Run the script below on the system hosting the repository, replacing ADDRESS in the baseurl with the IP address or FQDN of the system hosting the repository: #!/bin/sh REPOFILE="/etc/yum.repos.d/rhev.repo" echo -e " " > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e "[USD(basename USDDIR)]" >> USDREPOFILE echo -e "name=USD(basename USDDIR)" >> USDREPOFILE echo -e "baseurl=ftp://_ADDRESS_/pub/rhvrepo/`basename USDDIR`" >> USDREPOFILE echo -e "enabled=1" >> USDREPOFILE echo -e "gpgcheck=0" >> USDREPOFILE echo -e "\n" >> USDREPOFILE done Return to Section 3.4, "Installing and Configuring the Red Hat Virtualization Manager" . 
Packages are installed from the local repository, instead of from the Content Delivery Network.
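For illustration only, a single stanza produced by that script would take the following form; the repository directory name below is borrowed from the repositories enabled earlier in this procedure, and the address 192.0.2.10 is a hypothetical placeholder for the system hosting the repository:
[rhel-7-server-rhv-4.3-manager-rpms]
name=rhel-7-server-rhv-4.3-manager-rpms
baseurl=ftp://192.0.2.10/pub/rhvrepo/rhel-7-server-rhv-4.3-manager-rpms
enabled=1
gpgcheck=0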
|
[
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"yum repolist",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms",
"yum install vsftpd",
"systemctl start vsftpd.service systemctl enable vsftpd.service",
"mkdir /var/ftp/pub/rhvrepo",
"reposync -l -p /var/ftp/pub/rhvrepo",
"yum install createrepo",
"for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do createrepo USDDIR; done",
"#!/bin/sh REPOFILE=\"/etc/yum.repos.d/rhev.repo\" echo -e \" \" > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e \"[USD(basename USDDIR)]\" >> USDREPOFILE echo -e \"name=USD(basename USDDIR)\" >> USDREPOFILE echo -e \"baseurl=ftp://_ADDRESS_/pub/rhvrepo/`basename USDDIR`\" >> USDREPOFILE echo -e \"enabled=1\" >> USDREPOFILE echo -e \"gpgcheck=0\" >> USDREPOFILE echo -e \"\\n\" >> USDREPOFILE done"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/configuring_an_offline_repository_for_red_hat_virtualization_manager_installation_sm_remotedb_deploy
|
function::tcpmib_local_port
|
function::tcpmib_local_port Name function::tcpmib_local_port - Get the local port Synopsis Arguments sk pointer to a struct inet_sock Description Returns the source port ( sport ) from a struct inet_sock in host order.
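A minimal usage sketch, not part of this reference entry: the probe point below and the availability of the socket pointer as $sk are assumptions, and the tapset is expected to accept the pointer and treat it as a struct inet_sock.
stap -e 'probe kernel.function("tcp_set_state") { printf("local port: %d\n", tcpmib_local_port($sk)) }'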
|
[
"tcpmib_local_port:long(sk:long)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tcpmib-local-port
|
Appendix A. Using your subscription
|
Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ Streams entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Revised on 2021-12-14 20:09:30 UTC
| null |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_openshift/using_your_subscription
|
Chapter 7. The X Window System
|
Chapter 7. The X Window System While the heart of Red Hat Enterprise Linux is the kernel, for many users, the face of the operating system is the graphical environment provided by the X Window System , also called X . Various windowing environments have existed in the UNIX TM world for decades, predating many of the current mainstream operating systems. Through the years, X has become the dominant graphical environment for UNIX-like operating systems. The graphical environment for Red Hat Enterprise Linux is supplied by the X.Org Foundation , an open source consortium created to manage development and strategy for the X Window System and related technologies. X.Org is a large scale, rapidly developing project with hundreds of developers around the world. It features a wide degree of support for a variety of hardware devices and architectures, and can run on a variety of different operating systems and platforms. This release for Red Hat Enterprise Linux specifically includes the X11R6.8 release of the X Window System. The X Window System uses a client-server architecture. The X server (the Xorg binary) listens for connections from X client applications via a network or local loopback interface. The server communicates with the hardware, such as the video card, monitor, keyboard, and mouse. X client applications exist in the user-space, creating a graphical user interface ( GUI ) for the user and passing user requests to the X server. 7.1. The X11R6.8 Release Red Hat Enterprise Linux 4.5.0 uses the X11R6.8 release as the base X Window System, which includes many cutting edge X.Org technology enhancements, such as 3D hardware acceleration support, the XRender extension for anti-aliased fonts, a modular driver-based design, and support for modern video hardware and input devices. Important Red Hat Enterprise Linux no longer provides the XFree86 TM server packages. Before upgrading to the latest version of Red Hat Enterprise Linux, be sure that the video card is compatible with the X11R6.8 release by checking the Red Hat Hardware Compatibility List located online at http://hardware.redhat.com/ . The files related to the X11R6.8 release reside primarily in two locations: /usr/X11R6/ Contains X server and some client applications, as well as X header files, libraries, modules, and documentation. /etc/X11/ Contains configuration files for X client and server applications. This includes configuration files for the X server itself, the fs font server, the X display managers, and many other base components. It is important to note that the configuration file for the newer Fontconfig-based font architecture is /etc/fonts/fonts.conf (which obsoletes the /etc/X11/XftConfig file). For more on configuring and adding fonts, refer to Section 7.4, "Fonts" . Because the X server performs advanced tasks on a wide array of hardware, it requires detailed configuration. The installation program installs and configures X automatically, unless the X11R6.8 release packages are not selected for installation. However, if the monitor or video card changes, X must be reconfigured. The best way to do this is to use the X Configuration Tool ( system-config-display ). To start the X Configuration Tool while in an active X session, go to the Main Menu Button (on the Panel) => System Settings => Display . After using the X Configuration Tool during an X session, changes take effect after logging out and logging back in.
For more about using the X Configuration Tool , refer to the chapter titled X Window System Configuration in the System Administrators Guide . In some situations, reconfiguring the X server may require manually editing its configuration file, /etc/X11/xorg.conf . For information about the structure of this file, refer to Section 7.3, "X Server Configuration Files" .
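For illustration, entries in /etc/X11/xorg.conf are grouped into section blocks such as the following hypothetical Device section; the identifier and driver name are placeholders only, not recommendations:
Section "Device"
    Identifier "Videocard0"
    Driver "radeon"
EndSection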
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/ch-x
|
Chapter 32. Process instance migration
|
Chapter 32. Process instance migration Process instance migration (PIM) is a standalone service containing a user interface and a back-end. It is packaged as a Quarkus mutable JAR file. You can use the PIM service to define the migration between two different process definitions, known as a migration plan. The user can then apply the migration plan to the running process instance in a specific KIE Server. For more information about the PIM service, see Process Instance Migration Service in KIE (Drools, OptaPlanner and jBPM ) . 32.1. Installing the process instance migration service You can use the process instance migration (PIM) service to create, export and execute migration plans. The PIM service is provided through a GitHub repository. To install the PIM service, clone the GitHub repository, then run the service and access it in a web browser. Prerequisites You have defined processes in a backed up Red Hat Process Automation Manager development environment. Java Runtime Environment (JRE) version 11 or later is installed. Procedure Download the rhpam-7.13.5-add-ons.zip file from the Software Downloads page for Red Hat Process Automation Manager 7.13. Extract the rhpam-7.13.5-add-ons.zip file. Extract the rhpam-7.13.5-process-migration-service.zip file. Enter the following commands to create the database tables. Replace <user> with your user name and <host> with the name of the local host: Change directory to the process-migration directory. Use a text editor to create the servers.yaml configuration file with the following content and save in the process-migration directory. In this example, replace <user_name> and <password> with the credentials to log in to the KieServer. kieservers: - host: http://localhost:8080/kie-server/services/rest/server username: <user_name> password: <password> Use a text editor to create the datasource.yaml configuration file with the following content datasource.yaml and save in the process-migration directory. In this example, replace <user_name> and <password> with the credentials to log in to the database: quarkus: datasource: db-kind: postgresql jdbc: url: jdbc:postgresql://localhost:5432/rhpam7 username: <user_name> password: <password> Rebuild the quarkus-run.jar file to include the PostgreSQL driver: The output of this command should be similar to the following example: Run the quarkus-app JAR file: This command returns output similar to the following example: To access the Process Instance Migration application, enter http://localhost:8090/ in a web browser . When prompted, enter the user name admin and the password admin1! . The Process Instance Migration console appears. 32.2. Using Keystore Vault You can use the Quarkiverse File Vault extension to store credentials as keystore files and use the file method to use the keystore files with the Process Instance Migration (PIM) Keystore Vault. For more information about the Quarkiverse File Vault extension, see Quarkiverse File Vault . For more information about using the KeyStore Vault, see Using Keystore Vault on GitHub. For more information about credentials provision, see the Credentials Provider section in the Quarkus documentation. Note You can only use database and KIE Server related credentials for PIM configurations. Procedure To add passwords to anew or existing keystore file for the PIM Keystore Vault, use the keytool command. For example: Configure the PIM Keystore Vault to use the keystore file. 
For example: quarkus: file: vault: provider: pim: path: pimvault.p12 secret: USD{vault.storepassword} # This will be provided as a property Configure your application to use the credentials from the vault. For example: quarkus: datasource: credentials-provider: quarkus.file.vault.provider.pim.pimdb kieservers: - host: http://localhost:18080/kie-server/services/rest/server credentials-provider: quarkus.file.vault.provider.pim.kieserver To start PIM with the configured credentials, specify the credentials as an environment variable or as a system property. For example: As an environment variable: As a system property: 32.3. Creating a migration plan You can define the migration between two different process definitions, known as a migration plan, in the process instance migration (PIM) service web UI. Prerequisites You have defined processes in a backed up Red Hat Process Automation Manager development environment. The process instance migration service is running. Procedure Enter http://localhost:8080 in a web browser. Log in to the PIM service. In the upper right corner of the Process Instance Migration page, from the KIE Service list select the KIE Service you want to add a migration plan for. Click Add Plan . The Add Migration Plan Wizard window opens. In the Name field, enter a name for the migration plan. Optional: In the Description field, enter a description for the migration plan. Click . In the Source ContainerID field, enter the source container ID. In the Source ProcessId field, enter the source process ID. Click Copy Source To Target . In the Target ContainerID field, update the target container ID. Click Retrieve Definition from backend and click . From the Source Nodes list, select the source node you want to map. From the Target Nodes list, select the target node you want to map. If the Source Process Definition Diagram pane is not displayed, click Show Source Diagram . If the Target Process Definition Diagram pane is not displayed, click Show Target Diagram . Optional: To modify the view in the diagram panes, perform any of the following tasks: To select text, select the icon. To pan, select the icon. To zoom in, select the icon. To zoom out, select the icon. To fit to viewer, select the icon. Click Map these two nodes . Click . Optional: To export as a JSON file, click Export . In the Review & Submit tab, review the plan and click Submit Plan . Optional: To export as a JSON file, click Export . Review the response and click Close . 32.4. Editing a migration plan You can edit a migration plan in the process instance migration (PIM) service web UI. You can modify the migration plan name, description, specified nodes, and process instances. Prerequisites You have defined processes in a backed up Red Hat Process Automation Manager development environment. The PIM service is running. Procedure Enter http://localhost:8080 in a web browser. Log in to the PIM service. On the Process Instance Migration page, select the Edit Migration Plan icon on the row of the migration plan you want to edit. The Edit Migration Plan window opens. On each tab, modify the details you want to change. Click . Optional: To export as a JSON file, click Export . In the Review & Submit tab, review the plan and click Submit Plan . Optional: To export as a JSON file, click Export . Review the response and click Close . 32.5. Exporting a migration plan You can export migration plans as a JSON file using the process instance migration (PIM) service web UI. 
Prerequisites You have defined processes in a backed up Red Hat Process Automation Manager development environment. The PIM service is running. Procedure Enter http://localhost:8080 in a web browser. Log in to the PIM service. On the Process Instance Migration page, select the Export Migration Plan icon on the row of the migration plan you want to execute. The Export Migration Plan window opens. Review and click Export . 32.6. Executing a migration plan You can execute the migration plan in the process instance migration (PIM) service web UI. Prerequisites You have defined processes in a backed up Red Hat Process Automation Manager development environment. The PIM service is running. Procedure Enter http://localhost:8080 in a web browser. Log in to the PIM service. On the Process Instance Migration page, select the Execute Migration Plan icon on the row of the migration plan you want to execute. The Execute Migration Plan Wizard window opens. From the migration plan table, select the check box on the row of each running process instance you want to migrate, and click . In the Callback URL field, enter the callback URL. To the right of Run migration , perform one of the following tasks: To execute the migration immediately, select Now . To schedule the migration, select Schedule and in the text field, enter the date and time, for example 06/20/2019 10:00 PM . Click . Optional: To export as a JSON file, click Export . Click Execute Plan . Optional: To export as a JSON file, click Export . Check the response and click Close . 32.7. Deleting a migration plan You can delete a migration plan in the process instance migration (PIM) service web UI. Prerequisites You have defined processes in a backed up Red Hat Process Automation Manager development environment. The PIM service is running. Procedure Enter http://localhost:8080 in a web browser. Log in to the PIM service. On the Process Instance Migration page, select the Delete icon on the row of the migration plan you want to delete. The Delete Migration Plan window opens. Click Delete to confirm deletion.
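As an optional check that the service is up, the startup log in this section shows a Quarkus application listening on port 8090 with the smallrye-health feature installed, so a request such as the following should return a health status; the /q/health path is the Quarkus default and is an assumption here rather than part of the PIM documentation:
curl http://localhost:8090/q/health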
|
[
"psql -U <user> -h <host> -d rhpam7 -f ~/process-migration/ddl-scripts/postgres/postgresql-quartz-schema.sql psql -U <user> -h <host> -d rhpam7 -f ~/process-migration/ddl-scripts/postgres/postgresql-pim-schema.sql",
"kieservers: - host: http://localhost:8080/kie-server/services/rest/server username: <user_name> password: <password>",
"quarkus: datasource: db-kind: postgresql jdbc: url: jdbc:postgresql://localhost:5432/rhpam7 username: <user_name> password: <password>",
"java -jar -Dquarkus.launch.rebuild=true -Dquarkus.datasource.db-kind=postgresql quarkus-app/quarkus-run.jar",
"INFO [io.qua.dep.QuarkusAugmentor] (main) Quarkus augmentation completed in 2657ms",
"java -jar -Dquarkus.http.port=8090 -Dquarkus.config.locations=servers.yaml,datasource.yaml quarkus-app/quarkus-run.jar",
"__ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ 2022-03-11 13:04:18,261 INFO [org.fly.cor.int.lic.VersionPrinter] (main) Flyway Community Edition 7.14.0 by Redgate 2022-03-11 13:04:18,262 INFO [org.fly.cor.int.dat.bas.BaseDatabaseType] (main) Database: jdbc:postgresql://localhost:5432/rhpam7 (PostgreSQL 13.4) 2022-03-11 13:04:18,280 INFO [org.fly.cor.int.com.DbMigrate] (main) Current version of schema \"public\": 1.0 2022-03-11 13:04:18,281 INFO [org.fly.cor.int.com.DbMigrate] (main) Schema \"public\" is up to date. No migration necessary. 2022-03-11 13:04:18,601 INFO [org.qua.imp.jdb.JobStoreCMT] (main) Freed 0 triggers from 'acquired' / 'blocked' state. 2022-03-11 13:04:18,603 INFO [org.qua.imp.jdb.JobStoreCMT] (main) Recovering 0 jobs that were in-progress at the time of the last shut-down. 2022-03-11 13:04:18,603 INFO [org.qua.imp.jdb.JobStoreCMT] (main) Recovery complete. 2022-03-11 13:04:18,603 INFO [org.qua.imp.jdb.JobStoreCMT] (main) Removed 0 'complete' triggers. 2022-03-11 13:04:18,603 INFO [org.qua.imp.jdb.JobStoreCMT] (main) Removed 0 stale fired job entries. 2022-03-11 13:04:18,624 INFO [org.kie.ser.api.mar.MarshallerFactory] (main) Marshaller extensions init 2022-03-11 13:04:18,710 INFO [org.kie.pro.ser.imp.KieServiceImpl] (main) Loaded kie server configuration for: org.kie.processmigration.model.config.KieServersUSDKieServer9579928Impl@4b6b5352 2022-03-11 13:04:18,715 INFO [org.kie.pro.ser.RecoveryService] (main) Resuming ongoing migrations 2022-03-11 13:04:18,856 INFO [io.quarkus] (main) process-migration-service 7.59.0.Final-redhat-00006 on JVM (powered by Quarkus 2.2.3.Final-redhat-00013) started in 1.443s. Listening on: http://0.0.0.0:8090 2022-03-11 13:04:18,857 INFO [io.quarkus] (main) Profile prod activated. 2022-03-11 13:04:18,857 INFO [io.quarkus] (main) Installed features: [agroal, cdi, config-yaml, flyway, hibernate-orm, hibernate-orm-panache, jdbc-db2, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, narayana-jta, quartz, resteasy, resteasy-jackson, scheduler, security, security-jdbc, security-ldap, security-properties-file, smallrye-context-propagation, smallrye-health]",
"keytool -importpass -alias pimdb -keystore pimvault.p12 -storepass password -storetype PKCS12 keytool -importpass -alias kieserver -keystore pimvault.p12 -storepass password -storetype PKCS12 keytool -importpass -alias cert -keystore pimvault.p12 -storepass password -storetype PKCS12 keytool -importpass -alias keystore -keystore pimvault.p12 -storepass password -storetype PKCS12 keytool -importpass -alias truststore -keystore pimvault.p12 -storepass password -storetype PKCS12",
"quarkus: file: vault: provider: pim: path: pimvault.p12 secret: USD{vault.storepassword} # This will be provided as a property",
"quarkus: datasource: credentials-provider: quarkus.file.vault.provider.pim.pimdb kieservers: - host: http://localhost:18080/kie-server/services/rest/server credentials-provider: quarkus.file.vault.provider.pim.kieserver",
"VAULT_STOREPASSWORD=mysecret java -jar quarkus-app/quarkus-run.jar",
"java -Dvault.storepassword=password -jar quarkus-app/quarkus-run.jar"
] |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/process-instance-migration-con
|
Chapter 30. Utility functions for using ansi control chars in logs
|
Chapter 30. Utility functions for using ansi control chars in logs Utility functions for logging using ANSI control characters. These functions let you manipulate the cursor position and the color and display attributes of characters in log messages.
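A minimal sketch of how these helpers might be combined from the command line; the function names ansi_clear_screen, ansi_set_color2, and ansi_reset_color are assumed from this tapset and should be verified against the reference entries that follow:
stap -e 'probe begin { ansi_clear_screen(); ansi_set_color2(31, 40); printf("tracing started\n"); ansi_reset_color(); exit() }'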
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/ansi-dot-stp
|
15.2. Booting a Guest Using PXE
|
15.2. Booting a Guest Using PXE This section demonstrates how to boot a guest virtual machine with PXE. 15.2.1. Using Bridged Networking Procedure 15.2. Booting a guest using PXE and bridged networking Ensure bridging is enabled such that the PXE boot server is available on the network. Boot a guest virtual machine with PXE booting enabled. You can use the virt-install command to create a new virtual machine with PXE booting enabled, as shown in the following example command: Alternatively, ensure that the guest network is configured to use your bridged network, and that the XML guest configuration file has a <boot dev='network'/> element inside the <os> element, as shown in the following example:
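Before starting the guest, you can optionally confirm that the bridge used in the example (breth0) exists and is up; this check is a general suggestion rather than part of the documented procedure:
ip link show breth0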
|
[
"virt-install --pxe --network bridge=breth0 --prompt",
"<os> <type arch='x86_64' machine='rhel6.2.0'>hvm</type> <boot dev='network'/> <boot dev='hd'/> </os> <interface type='bridge'> <mac address='52:54:00:5a:ad:cb'/> <source bridge='breth0'/> <target dev='vnet0'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-virtualization_host_configuration_and_guest_installation_guide-libvirt_network_booting-boot_using_pxe
|
Chapter 5. Bug fixes
|
Chapter 5. Bug fixes This part describes bugs fixed in Red Hat Enterprise Linux 9.0 that have a significant impact on users. 5.1. Installer and image creation --leavebootorder no longer changes boot order Previously, using --leavebootorder for the bootloader kickstart command did not work correctly on UEFI systems and changed the boot order. This caused the installer to add RHEL at the top of the list of installed systems in the UEFI boot menu. This update fixes the problem and using --leavebootorder no longer changes the boot order in the boot loader. --leavebootorder is now supported on RHEL for UEFI systems. ( BZ#2025953 ) Anaconda sets a static hostname before running the %post scripts Previously, when Anaconda was setting the installer environment host name to the value from the kickstart configuration ( network --hostname ), it used to set a transient hostname. Some of the actions performed during %post script run, for example network device activation, were causing the host name reset to a value obtained by reverse dns . With this update, Anaconda now sets a static hostname of the installer environment to be stable during the run of kickstart %post scripts. ( BZ#2009403 ) Users can now specify user accounts in the RHEL for Edge Installer blueprint Previously, performing an update on your blueprint without a user account defined in the edge commit for the upgrade, such as adding a rpm package, would cause users to be locked out of a system, after an upgrade is applied. It caused users to redefine user accounts when upgrading an existing system.This issue has been fixed to allow users to specify user accounts in the RHEL for Edge Installer blueprint, which creates a user on the system at installation time, rather than having the user as part of the ostree commit. ( BZ#2060575 ) The basic graphics mode has been removed from the boot menu Previously, the basic graphics mode was used to install RHEL on hardware with an unsupported graphics card or to work around issues in graphic drivers that prevented starting the graphical interface. With this update, the option to install in a basic graphics mode has been removed from the installer boot menu. Use the VNC installation options for graphical installations on unsupported hardware or to work around driver bugs. For more information on installations using VNC, see the Performing a remote RHEL installation using VNC section. ( BZ#1961092 ) 5.2. Subscription management virt-who now works correctly with Hyper-V hosts Previously, when using virt-who to set up RHEL 9 virtual machines (VMs) on a Hyper-V hypervisor, virt-who did not properly communicate with the hypervisor, and the setup failed. This was because of a deprecated encryption method in the openssl package. With this update, the virt-who authentication mode for Hyper-V has been modified, and setting up RHEL 9 VMs on Hyper-V using virt-who now works correctly. Note that this also requires the hypervisor to use basic authentication mode. To enable this mode, use the following commands: ( BZ#2008215 ) 5.3. Software management Running createrepo_c --update on a modular repository now preserves modular metadata in it Previously, when running the createrepo_c --update command on an already existing modular repository without the original source of modular metadata present, the default policy was to remove all additional metadata including modular metadata from this repository, which, consequently, broke it. 
To preserve metadata, it was previously necessary to run the createrepo_c --update command with the additional --keep-all-metadata option. With this update, you can preserve modular metadata on a modular repository by running createrepo_c --update without any additional option. To remove additional metadata, you can use the new --discard-additional-metadata option. ( BZ#2055032 ) 5.4. Shells and command-line tools RHEL 9 provides libservicelog 1.1.19 RHEL 9 is distributed with libservicelog version 1.1.19. Notable bug fixes include: Fixed output alignment issue. Fixed segfault on servicelog_open() failure. (BZ#1869568) 5.5. Security Hardware optimization enabled in libgcrypt when in the FIPS mode Previously, the Federal Information Processing Standard (FIPS 140-2) did not allow using hardware optimization. Therefore, in previous versions of RHEL, this optimization was disabled in the libgcrypt package when in the FIPS mode. RHEL 9 enables hardware optimization in FIPS mode, and as a result, all cryptographic operations are performed faster. ( BZ#1990059 ) crypto-policies now can disable ChaCha20 cipher usage Previously, the crypto-policies package used a wrong keyword to disable the ChaCha20 cipher in OpenSSL. Consequently, you could not disable ChaCha20 for the TLS 1.2 protocol in OpenSSL through crypto-policies . With this update, the -CHACHA20 keyword is used instead of -CHACHA20-POLY1305 . As a result, you can now use the cryptographic policies for disabling ChaCha20 cipher usage in OpenSSL for TLS 1.2 and TLS 1.3. ( BZ#2004207 ) 64-bit IBM Z systems no longer become unbootable when installing in FIPS mode Previously, the fips-mode-setup command with the --no-bootcfg option did not execute the zipl tool. Because fips-mode-setup regenerates the initial RAM disk ( initrd ), and the resulting system needs an update of zipl internal state to boot, this put 64-bit IBM Z systems into an unbootable state after installing in FIPS mode. With this update, fips-mode-setup now executes zipl on 64-bit IBM Z systems even if invoked with --no-bootcfg , and as a result, the newly installed system boots successfully. (BZ#2013195) GNUTLS_NO_EXPLICIT_INIT no longer disables implicit library initialization Previously, the GNUTLS_NO_EXPLICIT_INIT environment variable disabled implicit library initialization. In RHEL 9, the GNUTLS_NO_IMPLICIT_INIT variable disables implicit library initialization instead. (BZ#1999639) OpenSSL-based applications now work correctly with the Turkish locale Because the OpenSSL library uses case-insensitive string comparison functions, OpenSSL-based applications did not work correctly with the Turkish locale, and omitted checks caused applications using this locale to crash. This update provides a patch to use the Portable Operating System Interface (POSIX) locale for case-insensitive string comparison. As a result, OpenSSL-based applications such as curl work correctly with the Turkish locale. ( BZ#2071631 ) kdump no longer crashes due to SELinux permissions The kdump crash recovery service requires additional SELinux permissions to start correctly. In previous versions, SELinux therefore prevented kdump from working: kdump reported that it was not operational, and Access Vector Cache (AVC) denials were audited. In this version, the required permissions were added to selinux-policy , and as a result, kdump works correctly and no AVC denial is audited. (BZ#1932752) The usbguard-selinux package is no longer dependent on usbguard Previously, the usbguard-selinux package was dependent on the usbguard package.
This, in combination with other dependencies of these packages, led to file conflicts when installing usbguard . As a consequence, this prevented the installation of usbguard on certain systems. With this version, usbguard-selinux no longer depends on usbguard , and as a result, dnf can install usbguard correctly. ( BZ#1986785 ) dnf install and dnf update now work with fapolicyd in SELinux The fapolicyd-selinux package, which contains SELinux rules for fapolicyd, did not contain permissions to watch all files and directories. As a consequence, the fapolicyd-dnf-plugin did not work correctly, causing any dnf install and dnf update commands to make the system stop responding indefinitely. In this version, the permissions to watch any file type were added to fapolicyd-selinux . As a result, the fapolicyd-dnf-plugin works correctly and the commands dnf install and dnf update are operational. (BZ#1932225) Ambient capabilities are now applied correctly to non-root users As a safety measure, changing a UID (User Identifier) from root to non-root nullifies the permitted, effective, and ambient sets of capabilities. However, the pam_cap.so module is unable to set ambient capabilities because a capability needs to be in both the permitted and the inheritable set to be in the ambient set. In addition, the permitted set gets nullified after changing the UID (for example by using the setuid utility), so the ambient capability cannot be set. To fix this problem, the pam_cap.so module now supports the keepcaps option, which allows a process to retain its permitted capabilities after changing the UID from root to non-root. The pam_cap.so module now also supports the defer option, which causes pam_cap.so to reapply ambient capabilities within a callback to pam_end() . This callback can be used by other applications after changing the UID. Therefore, if the su and login utilities are updated and PAM-compliant, you can now use pam_cap.so with the keepcaps and defer options to set ambient capabilities for non-root users. ( BZ#2037215 ) usbguard-notifier no longer logs too many error messages to the Journal Previously, the usbguard-notifier service did not have inter-process communication (IPC) permissions for connecting to the usbguard-daemon IPC interface. Consequently, usbguard-notifier failed to connect to the interface, and it wrote a corresponding error message to the Journal. Because usbguard-notifier started with the --wait option, which ensured that usbguard-notifier attempted to connect to the IPC interface each second after a connection failure, the log soon contained an excessive number of these messages by default. With this update, usbguard-notifier does not start with --wait by default. The service attempts to connect to the daemon only three times, at 1-second intervals. As a result, the log contains three such error messages at maximum. ( BZ#2009226 ) 5.6. Networking Wifi and 802.1x Ethernet connection profiles are now connecting properly Previously, many Wifi and 802.1x Ethernet connection profiles were not able to connect. This bug is now fixed. All the profiles are now connecting properly. Profiles that use legacy cryptographic algorithms still work, but you need to manually enable the OpenSSL legacy provider. This is required, for example, when you use DES with MS-CHAPv2 and RC4 with TKIP. ( BZ#1975718 ) Afterburn no longer sets an overlong hostname in /etc/hostname The maximum length of a RHEL hostname is 64 characters.
However, certain cloud providers use the Fully-Qualified Domain Name (FQDN) as the hostname, which can be up to 255 characters. Previously, the afterburn-hostname service wrote such an overlong hostname directly to the /etc/hostname file. The systemd service truncated the hostname to 64 characters, and NetworkManager derived an incorrect DNS search domain from the truncated value. With this fix, afterburn-hostname truncates hostnames at the first dot or 64 characters, whichever comes first. As a result, NetworkManager no longer sets invalid DNS search domains in /etc/resolv.conf . ( BZ#2008521 ) 5.7. Kernel modprobe loads out-of-tree kernel modules as expected The /etc/depmod.d/dist.conf configuration file provides a search order for the depmod utility. Based on the search order, depmod creates the modules.dep.bin file. This file lists module dependencies, which the modprobe utility uses for loading and unloading kernel modules and resolving module dependencies at the same time. Previously, /etc/depmod.d/dist.conf was missing. As a result, modprobe could not load some out-of-tree kernel modules. This update includes the /etc/depmod.d/dist.conf configuration file, which fixes the search order. As a result, modprobe loads out-of-tree kernel modules as expected. ( BZ#1985100 ) alsa-lib now correctly handles audio devices that use UCM A bug in the alsa-lib package caused incorrect parsing of the internal Use Case Manager (UCM) identifier. Consequently, some audio devices that used the UCM configuration were not detected or they did not function correctly. The problem occurred more often when the system used the pipewire sound service. With the new release of RHEL 9, the problem has been fixed by updating the alsa-lib library. ( BZ#2015863 ) 5.8. File systems and storage Protection uevents no longer cause reload failure of multipath devices Previously, when a read-only path device was rescanned, the kernel sent out two write protection uevents - one with the device set to read/write , and the following with the device set to read-only . Consequently, upon detection of the read/write uevent on a path device, multipathd tried to reload the multipath device, which caused a reload error message. With this update, multipathd now checks that all the paths are set to read/write before reloading a device read/write. As a result, multipathd no longer tries to reload read/write whenever a read-only device is rescanned. (BZ#2017979) device-mapper-multipath rebased to version 0.8.7 The device-mapper-multipath package has been upgraded to version 0.8.7, which provides multiple bug fixes and enhancements. Notable changes include: Fixed memory leaks in the multipath and kpartx commands. Fixed repeated trigger errors from the multipathd.socket unit file. Improved autoconfiguration of more devices, such as DELL SC Series arrays, EMC Invista and Symmetrix arrays (among others). ( BZ#2017592 ) 5.9. High availability and clusters Pacemaker attribute manager correctly determines remote node attributes, preventing unfencing loops Previously, Pacemaker's controller on a node might be elected the Designated Controller (DC) before its attribute manager learned an already-active remote node is remote. When this occurred, the node's scheduler would not see any of the remote node's node attributes. If the cluster used unfencing, this could result in an unfencing loop. With the fix, the attribute manager can now learn a remote node is remote by means of additional events, including the initial attribute sync at start-up. 
As a result, no unfencing loop occurs, regardless of which node is elected DC. ( BZ#1975388 ) 5.10. Compilers and development tools -Wsequence-point warning behavior fixed Previously, when compiling C++ programs with GCC, the -Wsequence-point warning option tried to warn about very long expressions, which could cause quadratic behavior and therefore significantly longer compilation times. With this update, -Wsequence-point doesn't attempt to warn about extremely large expressions and, as a result, does not increase compilation time. (BZ#1481850) 5.11. Identity Management MS-CHAP authentication with the OpenSSL legacy provider Previously, FreeRADIUS authentication mechanisms that used MS-CHAP failed because they depended on MD4 hash functions, and MD4 has been deprecated in RHEL 9. With this update, you can authenticate FreeRADIUS users with MS-CHAP or MS-CHAPv2 if you enable the OpenSSL legacy provider. If you use the default OpenSSL provider, MS-CHAP and MS-CHAPv2 authentication fails and the following error message is displayed, indicating the fix: ( BZ#1978216 ) Running sudo commands no longer exports the KRB5CCNAME environment variable Previously, after running sudo commands, the environment variable KRB5CCNAME pointed to the Kerberos credential cache of the original user, which might not be accessible to the target user. As a result, Kerberos-related operations might fail because this cache was not accessible. With this update, running sudo commands no longer sets the KRB5CCNAME environment variable and the target user can use their default Kerberos credential cache. (BZ#1879869) SSSD correctly evaluates the default setting for the Kerberos keytab name in /etc/krb5.conf Previously, if you defined a non-standard location for your krb5.keytab file, SSSD did not use this location and used the default /etc/krb5.keytab location instead. As a result, when you tried to log into the system, the login failed because /etc/krb5.keytab contained no entries. With this update, SSSD now evaluates the default_keytab_name variable in /etc/krb5.conf and uses the location specified by this variable. SSSD only uses the default /etc/krb5.keytab location if the default_keytab_name variable is not set. (BZ#1737489) Authenticating to Directory Server in FIPS mode with passwords hashed with the PBKDF2 algorithm now works as expected When Directory Server runs in Federal Information Processing Standard (FIPS) mode, the PK11_ExtractKeyValue() function is not available. As a consequence, prior to this update, users with a password hashed with the password-based key derivation function 2 (PBKDF2) algorithm were not able to authenticate to the server when FIPS mode was enabled. With this update, Directory Server now uses the PK11_Decrypt() function to get the password hash data. As a result, authentication with passwords hashed with the PBKDF2 algorithm now works as expected. ( BZ#1779685 ) 5.12. Red Hat Enterprise Linux system roles The Networking system role no longer fails to set a DNS search domain if IPv6 is disabled Previously, the nm_connection_verify() function of the libnm library did not ignore the DNS search domain if the IPv6 protocol was disabled. As a consequence, when you used the Networking RHEL system role and set dns_search together with ipv6_disabled: true , the system role failed with the following error: With this update, the nm_connection_verify() function ignores the DNS search domain if IPv6 is disabled. As a consequence, you can use dns_search as expected, even if IPv6 is disabled.
( BZ#2004899 ) Postfix role README no longer uses plain role name Previously, the examples provided in the /usr/share/ansible/roles/rhel-system-roles.postfix/README.md used the plain version of the role name, postfix , instead of using rhel-system-roles.postfix . Consequently, users would consult the documentation and incorrectly use the plain role name instead of the Fully Qualified Role Name (FQRN). This update fixes the issue, and the documentation contains examples with the FQRN, rhel-system-roles.postfix , enabling users to correctly write playbooks. ( BZ#1958964 ) Postfix RHEL system role README.md is no longer missing variables under the "Role Variables" section Previously, the Postfix RHEL system role variables, such as postfix_check , postfix_backup , and postfix_backup_multiple , were not available under the "Role Variables" section. Consequently, users were not able to consult the Postfix role documentation. This update adds role variable documentation to the Postfix README section. The role variables are documented and available for users in the doc/usr/share/doc/rhel-system-roles/postfix/README.md documentation provided by rhel-system-roles . ( BZ#1978734 ) Role tasks no longer report as changed when run again with the same input Previously, several of the role tasks would report as CHANGED when running the same input once again, even if there were no changes. Consequently, the role was not acting idempotently. To fix the issue, perform the following actions: Check if configuration variables change before applying them. You can use the option --check for this verification. Do not add a Last Modified: $date header to the configuration file. As a result, the role tasks are idempotent. ( BZ#1978760 ) The logging_purge_confs option correctly deletes unnecessary configuration files With the logging_purge_confs option set to true , it should delete unnecessary logging configuration files. Previously, however, unnecessary configuration files were not deleted from the configuration directory even if logging_purge_confs was set to true . This issue is now fixed and the option has been redefined as follows: if logging_purge_confs is set to true , Rsyslog removes files from the rsyslog.d directory which do not belong to any RPM packages. This includes configuration files generated by runs of the Logging role. The default value of logging_purge_confs is false . ( BZ#2039106 ) A playbook using the Metrics role completes successfully on multiple runs even if the Grafana admin password is changed Previously, changes to the Grafana admin user password after running the Metrics role with the metrics_graph_service: yes boolean caused failure on subsequent runs of the Metrics role. This led to failures of playbooks using the Metrics role, and the affected systems were only partially set up for performance analysis. Now, the Metrics role uses the Grafana deployment API when it is available and no longer requires knowledge of the username or password to perform the necessary configuration actions. As a result, a playbook using the Metrics role completes successfully on multiple runs even if the administrator changes the Grafana admin password. ( BZ#2041632 ) Configuration by the Metrics role now follows symbolic links correctly When the mssql pcp package is installed, the mssql.conf file is located in /etc/pcp/mssql/ and is targeted by the symbolic link /var/lib/pcp/pmdas/mssql/mssql.conf . Previously, however, the Metrics role overwrote the symbolic link instead of following it and configuring mssql.conf .
Consequently, running the Metrics role changed the symbolic link to a regular file and the configuration therefore only affected the /var/lib/pcp/pmdas/mssql/mssql.conf file. This resulted in a failed symbolic link, and the main configuration file /etc/pcp/mssql/mssql.conf was not affected by the configuration. The issue is now fixed and the follow: yes option to follow the symbolic link has been added to the Metrics role. As a result, the Metrics role preserves the symbolic links and correctly configures the main configuration file. ( BZ#2058777 ) The timesync role no longer fails to find the requested service ptp4l Previously, on some versions of RHEL, the Ansible service_facts module reported service facts incorrectly. Consequently, the timesync role reported an error attempting to stop the ptp4l service. With this fix, the Ansible service_facts module checks the return value of the tasks to stop timesync services. If the returned value is failed , but the error message is Could not find the requested service NAME: , then the module assumes success. As a result, the timesync role now runs without errors like Could not find the requested service ptp4l . (BZ#2058645) The kernel_settings configobj is available on managed hosts Previously, the kernel_settings role did not install the python3-configobj package on managed hosts. As a consequence, the role returned an error stating that the configobj Python module could not be found. With this fix, the role ensures that the python3-configobj package is present on managed hosts and the kernel_settings role works as expected. ( BZ#2058756 ) The Terminal Session Recording role tlog-rec-session is now correctly overlaid by SSSD Previously, the Terminal Session Recording RHEL system role relied on the System Security Services Daemon (SSSD) files provider and on the enabled authselect option with-files-domain to set up correct passwd entries in the nsswitch.conf file. In RHEL 9.0, SSSD did not implicitly enable the files provider by default, and consequently the tlog-rec-session shell overlay by SSSD did not work. With this fix, the Terminal Session Recording role now updates nsswitch.conf to ensure tlog-rec-session is correctly overlaid by SSSD. ( BZ#2071804 ) The SSHD system role can manage systems in FIPS mode Previously, the SSHD system role could not create the not allowed HostKey type when called. As a consequence, the SSHD system role could not manage RHEL 8 and older systems in Federal Information Processing Standard (FIPS) mode. With this update, the SSHD system role detects FIPS mode and adjusts the default HostKey list correctly. As a result, the system role can manage RHEL systems in FIPS mode with the default HostKey configuration. ( BZ#2029634 ) The SSHD system role uses the correct template file Previously, the SSHD system role used a wrong template file. As a consequence, the generated sshd_config file did not contain the ansible_managed comment. With this update, the system role uses the correct template file and sshd_config contains the correct ansible_managed comment. ( BZ#2044408 ) The Kdump RHEL system role is now able to reboot, or indicate that a reboot is required Previously, the Kdump RHEL system role ignored managed nodes without any memory reserved for the crash kernel. Consequently, the role finished with the "Success" status, even if it did not configure the system properly. With this update of RHEL 9, the problem has been fixed.
In cases when managed nodes do not have any memory reserved for the crash kernel, the Kdump RHEL system role fails and suggests that users set the kdump_reboot_ok variable to true to properly configure the kdump service on managed nodes. ( BZ#2029602 ) The nm provider in the Networking system role now correctly manages bridges Previously, if you used the initscripts provider, the Networking system role created an ifcfg file which configured NetworkManager to mark bridge interfaces as unmanaged. Also, NetworkManager failed to detect follow-up initscripts actions. For example, the down and absent actions of the initscripts provider did not change NetworkManager's understanding of the unmanaged state of this interface unless the connection was reloaded after those actions. With this fix, the Networking system role uses the NM.Client.reload_connections_async() function to reload NetworkManager on managed hosts with NetworkManager 1.18. As a result, NetworkManager manages the bridge interface when switching the provider from initscript to nm . ( BZ#2038957 ) Fixed a typo to support active-backup for the correct bonding mode Previously, there was a typo, active_backup , in supporting the InfiniBand port while specifying the active-backup bonding mode. Due to this typo, the connection failed to support the correct bonding mode for the InfiniBand bonding port. This update fixes the typo by changing the bonding mode to active-backup . The connection now successfully supports the InfiniBand bonding port. ( BZ#2064391 ) The Logging system role no longer calls tasks multiple times Previously, the Logging role called tasks multiple times that should have been called only once. As a consequence, the extra task calls slowed down the execution of the role. With this fix, the Logging role was changed to call the tasks only once, improving the Logging role performance. ( BZ#2004303 ) RHEL system roles now handle multi-line ansible_managed comments in generated files Previously, some of the RHEL system roles were using # {{ ansible_managed }} to generate some of the files. As a consequence, if a customer had a custom multi-line ansible_managed setting, the files would be generated incorrectly. With this fix, all of the system roles use the equivalent of {{ ansible_managed | comment }} when generating files so that the ansible_managed string is always properly commented, including multi-line ansible_managed values. Consequently, generated files have the correct multi-line ansible_managed value. ( BZ#2006230 ) The Firewall system role now reloads the firewall immediately when the target changes Previously, the Firewall system role was not reloading the firewall when the target parameter had been changed. With this fix, the Firewall role reloads the firewall when the target changes, and as a result, the target change is immediate and available for subsequent operations. ( BZ#2057164 ) The group option in the Certificate system role no longer keeps certificates inaccessible to the group Previously, when setting the group for a certificate, the mode was not set to allow group read permission. As a consequence, group members were unable to read certificates issued by the Certificate role. With this fix, the group setting now ensures that the file mode includes group read permission. As a result, the certificates issued by the Certificate role for groups are accessible by the group members.
( BZ#2021025 ) The Logging role no longer misses quotes for the immark module interval value Previously, the interval field value for the immark module was not properly quoted, and as a consequence the immark module was not properly configured. This fix ensures that the interval value is properly quoted. Now, the immark module works as expected. ( BZ#2021676 ) The /etc/tuned/kernel_settings/tuned.conf file has a proper ansible_managed header Previously, the kernel_settings RHEL system role had a hard-coded value for the ansible_managed header in the /etc/tuned/kernel_settings/tuned.conf file. Consequently, users could not provide their custom ansible_managed header. In this update, the problem has been fixed so that kernel_settings updates the header of /etc/tuned/kernel_settings/tuned.conf with the user's ansible_managed setting. As a result, /etc/tuned/kernel_settings/tuned.conf has a proper ansible_managed header. ( BZ#2047506 ) The VPN system role filter plugin vpn_ipaddr now converts to FQCN (Fully Qualified Collection Name) Previously, the conversion from the legacy role format to the collection format was not converting the filter plugin vpn_ipaddr to the FQCN (Fully Qualified Collection Name) redhat.rhel_system_roles.vpn_ipaddr . As a consequence, the VPN role could not find the plugin by the short name and reported an error. With this fix, the conversion script has been changed so that the filter is converted to the FQCN format in the collection. Now the VPN role runs without issuing the error. (BZ#2050341) Job for kdump.service no longer fails Previously, the Kdump role code for configuring the kernel crash size was not updated for RHEL 9, which requires the use of kdumpctl reset-crashkernel . As a consequence, the kdump.service could not start and issued an error. With this update, the Kdump role uses kdumpctl reset-crashkernel to configure the crash kernel size. Now, the Kdump role successfully starts the kdump service and the kernel crash size is configured correctly. (BZ#2050419) 5.13. Virtualization Hot-unplugging a mounted virtual disk no longer causes the guest kernel to crash on IBM Z Previously, when detaching a mounted disk from a running virtual machine (VM) on IBM Z hardware, the VM kernel crashed under the following conditions: The disk was attached with target bus type scsi and mounted inside the guest. After hot-unplugging the disk device, the corresponding SCSI controller was hot-unplugged as well. With this update, the underlying code has been fixed and the described crash no longer occurs. (BZ#1997541) 5.14. Containers UBI 9-Beta containers can run on RHEL 7 and 8 hosts Previously, the UBI 9-Beta container images had an incorrect seccomp profile set in the containers-common package. As a consequence, containers were not able to handle certain system calls, which caused failures. With this update, the problem has been fixed. ( BZ#2019901 )
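As a quick illustration of the crash-kernel sizing step mentioned in the Kdump role fix above, here is a minimal sketch for a single RHEL 9 host; it assumes the kexec-tools package is installed and the commands are run as root, and the new crashkernel value only takes effect after a reboot:

# Reset the crashkernel= boot parameter to the value recommended for this system
kdumpctl reset-crashkernel

# Make sure the kdump service is enabled so it starts after the reboot
systemctl enable kdump.service

Rebooting afterwards lets the kernel reserve the new crash-kernel memory, after which the kdump service can start normally.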
|
[
"winrm set winrm/config/service/auth '@{Basic=\"true\"}' winrm set winrm/config/service '@{AllowUnencrypted=\"true\"}'",
"Couldn't init MD4 algorithm. Enable OpenSSL legacy provider.",
"nm-connection-error-quark: ipv6.dns-search: this property is not allowed for 'method=ignore' (7)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.0_release_notes/bug_fixes
|
Chapter 12. Identity Brokering
|
Chapter 12. Identity Brokering An Identity Broker is an intermediary service that connects multiple service providers with different identity providers. As an intermediary service, the identity broker is responsible for creating a trust relationship with an external identity provider in order to use its identities to access internal services exposed by service providers. From a user perspective, an identity broker provides a user-centric and centralized way to manage identities across different security domains or realms. An existing account can be linked with one or more identities from different identity providers or even created based on the identity information obtained from them. An identity provider is usually based on a specific protocol that is used to authenticate and communicate authentication and authorization information to its users. It can be a social provider such as Facebook, Google or Twitter. It can be a business partner whose users need to access your services. Or it can be a cloud-based identity service that you want to integrate with. Usually, identity providers are based on the following protocols: SAML v2.0 OpenID Connect v1.0 OAuth v2.0 In the following sections we'll see how to configure and use Red Hat Single Sign-On as an identity broker, covering some important aspects such as: Social Authentication OpenID Connect v1.0 Brokering SAML v2.0 Brokering Identity Federation 12.1. Brokering Overview When using Red Hat Single Sign-On as an identity broker, users are not forced to provide their credentials in order to authenticate in a specific realm. Instead, they are presented with a list of identity providers from which they can authenticate. You can also configure a default identity provider. In this case the user will not be given a choice, but will instead be redirected directly to the default provider. The following diagram demonstrates the steps involved when using Red Hat Single Sign-On to broker an external identity provider: Identity Broker Flow User is not authenticated and requests a protected resource in a client application. The client application redirects the user to Red Hat Single Sign-On to authenticate. At this point the user is presented with the login page where there is a list of identity providers configured in a realm. User selects one of the identity providers by clicking on its respective button or link. Red Hat Single Sign-On issues an authentication request to the target identity provider asking for authentication and the user is redirected to the login page of the identity provider. The connection properties and other configuration options for the identity provider were previously set by the administrator in the Admin Console. User provides his credentials or consent in order to authenticate with the identity provider. Upon a successful authentication by the identity provider, the user is redirected back to Red Hat Single Sign-On with an authentication response. Usually this response contains a security token that will be used by Red Hat Single Sign-On to trust the authentication performed by the identity provider and retrieve information about the user. Now Red Hat Single Sign-On is going to check if the response from the identity provider is valid. If valid, it will import and create a new user or just skip that if the user already exists. If it is a new user, Red Hat Single Sign-On may ask the identity provider for information about the user if that info doesn't already exist in the token. This is what we call identity federation .
If the user already exists Red Hat Single Sign-On may ask him to link the identity returned from the identity provider with the existing account. We call this process account linking . What exactly is done is configurable and can be specified by setup of First Login Flow . At the end of this step, Red Hat Single Sign-On authenticates the user and issues its own token in order to access the requested resource in the service provider. Once the user is locally authenticated, Red Hat Single Sign-On redirects the user to the service provider by sending the token previously issued during the local authentication. The service provider receives the token from Red Hat Single Sign-On and allows access to the protected resource. There are some variations of this flow that we will talk about later. For instance, instead of presenting a list of identity providers, the client application can request a specific one. Or you can tell Red Hat Single Sign-On to force the user to provide additional information before federating his identity. Note Different protocols may require different authentication flows. At this moment, all the identity providers supported by Red Hat Single Sign-On use a flow just like described above. However, regardless of the protocol in use, user experience should be pretty much the same. As you may notice, at the end of the authentication process Red Hat Single Sign-On will always issue its own token to client applications. What this means is that client applications are completely decoupled from external identity providers. They don't need to know which protocol (eg.: SAML, OpenID Connect, OAuth, etc) was used or how the user's identity was validated. They only need to know about Red Hat Single Sign-On. 12.2. Default Identity Provider It is possible to automatically redirect to a identity provider instead of displaying the login form. To enable this go to the Authentication page in the administration console and select the Browser flow. Then click on config for the Identity Provider Redirector authenticator. Set Default Identity Provider to the alias of the identity provider you want to automatically redirect users to. If the configured default identity provider is not found the login form will be displayed instead. This authenticator is also responsible for dealing with the kc_idp_hint query parameter. See client suggested identity provider section for more details. 12.3. General Configuration The identity broker configuration is all based on identity providers. Identity providers are created for each realm and by default they are enabled for every single application. That means that users from a realm can use any of the registered identity providers when signing in to an application. In order to create an identity provider click the Identity Providers left menu item. Identity Providers In the drop down list box, choose the identity provider you want to add. This will bring you to the configuration page for that identity provider type. Add Identity Provider Above is an example of configuring a Google social login provider. Once you configure an IDP, it will appear on the Red Hat Single Sign-On login page as an option. IDP login page Social Social providers allow you to enable social authentication in your realm. Red Hat Single Sign-On makes it easy to let users log in to your application using an existing account with a social network. 
Currently supported providers include: Twitter, Facebook, Google, LinkedIn, Instagram, Microsoft, PayPal, Openshift v3, GitHub, GitLab, Bitbucket, and Stack Overflow. Protocol-based Protocol-based providers are those that rely on a specific protocol in order to authenticate and authorize users. They allow you to connect to any identity provider compliant with a specific protocol. Red Hat Single Sign-On provides support for SAML v2.0 and OpenID Connect v1.0 protocols. It makes it easy to configure and broker any identity provider based on these open standards. Although each type of identity provider has its own configuration options, all of them share some very common configuration. Regardless of which identity provider you are creating, you'll see the following configuration options available: Table 12.1. Common Configuration Configuration Description Alias The alias is a unique identifier for an identity provider. It is used to reference an identity provider internally. Some protocols such as OpenID Connect require a redirect URI or callback url in order to communicate with an identity provider. In this case, the alias is used to build the redirect URI. Every single identity provider must have an alias. Examples are facebook , google , idp.acme.com , etc. Enabled Turn the provider on/off. Hide on Login Page When this switch is on, this provider will not be shown as a login option on the login page. Clients can still request to use this provider by using the 'kc_idp_hint' parameter in the URL they use to request a login. Account Linking Only When this switch is on, this provider cannot be used to login users and will not be shown as an option on the login page. Existing accounts can still be linked with this provider though. Store Tokens Whether or not to store the token received from the identity provider. Stored Tokens Readable Whether or not users are allowed to retrieve the stored identity provider token. This also applies to the broker client-level role read token . Trust Email If the identity provider supplies an email address this email address will be trusted. If the realm required email validation, users that log in from this IDP will not have to go through the email verification process. GUI Order The order number that sorts how the available IDPs are listed on the login page. First Login Flow This is the authentication flow that will be triggered for users that log into Red Hat Single Sign-On through this IDP for the first time ever. Post Login Flow Authentication flow that is triggered after the user finishes logging in with the external identity provider. 12.4. Social Identity Providers For Internet facing applications, it is quite burdensome for users to have to register at your site to obtain access. It requires them to remember yet another username and password combination. Social identity providers allow you to delegate authentication to a semi-trusted and respected entity where the user probably already has an account. Red Hat Single Sign-On provides built-in support for the most common social networks out there, such as Google, Facebook, Twitter, GitHub, LinkedIn, Microsoft and Stack Overflow. 12.4.1. Bitbucket There are a number of steps you have to complete to be able to enable login with Bitbucket. First, open the Identity Providers left menu item and select Bitbucket from the Add provider drop down list. This will bring you to the Add identity provider page. 
Add Identity Provider Before you can click Save , you must obtain a Client ID and Client Secret from Bitbucket. Note You will use the Redirect URI from this page in a later step, which you will provide to Bitbucket when you register Red Hat Single Sign-On as a client there. Add a New App To enable login with Bitbucket you must first register an application project in OAuth on Bitbucket Cloud . Note Bitbucket often changes the look and feel of application registration, so what you see on the Bitbucket site may differ. If in doubt, see the Bitbucket documentation. Click the Add consumer button. Register App Copy the Redirect URI from the Red Hat Single Sign-On Add Identity Provider page and enter it into the Callback URL field on the Bitbucket Add OAuth Consumer page. On the same page, mark the Email and Read boxes under Account to allow your application to read user email. Bitbucket App Page When you are done registering, click Save . This will open the application management page in Bitbucket. Find the client ID and secret from this page so you can enter them into the Red Hat Single Sign-On Add identity provider page. Click Save . 12.4.2. Facebook There are a number of steps you have to complete to be able to enable login with Facebook. First, go to the Identity Providers left menu item and select Facebook from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider You can't click save yet, as you'll need to obtain a Client ID and Client Secret from Facebook. One piece of data you'll need from this page is the Redirect URI . You'll have to provide that to Facebook when you register Red Hat Single Sign-On as a client there, so copy this URI to your clipboard. To enable login with Facebook you first have to create a project and a client in the Facebook Developer Console . Note Facebook often changes the look and feel of the Facebook Developer Console, so these directions might not always be up to date and the configuration steps might be slightly different. Once you've logged into the console there is a pull down menu in the top right corner of the screen that says My Apps . Select the Add a New App menu item. Add a New App Select the Website icon. Click the Skip and Create App ID button. Create a New App ID The email address and app category are required fields. Once you're done with that, you will be brought to the dashboard for the application. Click the Settings left menu item. Create a New App ID Click on the + Add Platform button at the end of this page and select the Website icon. Copy and paste the Redirect URI from the Red Hat Single Sign-On Add identity provider page into the Site URL of the Facebook Website settings block. Specify Website After this it is necessary to make the Facebook app public. Click App Review left menu item and switch button to "Yes". You will need also to obtain the App ID and App Secret from this page so you can enter them into the Red Hat Single Sign-On Add identity provider page. To obtain this click on the Dashboard left menu item and click on Show under App Secret . Go back to Red Hat Single Sign-On and specify those items and finally save your Facebook Identity Provider. One config option to note on the Add identity provider page for Facebook is the Default Scopes field. This field allows you to manually specify the scopes that users must authorize when authenticating with this provider. For a complete list of scopes, please take a look at https://developers.facebook.com/docs/graph-api . 
By default, Red Hat Single Sign-On uses the following scopes: email . 12.4.3. GitHub There are a number of steps you have to complete to be able to enable login with GitHub. First, go to the Identity Providers left menu item and select GitHub from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider You can't click save yet, as you'll need to obtain a Client ID and Client Secret from GitHub. One piece of data you'll need from this page is the Redirect URI . You'll have to provide that to GitHub when you register Red Hat Single Sign-On as a client there, so copy this URI to your clipboard. To enable login with GitHub you first have to register an application project in GitHub Developer applications . Note GitHub often changes the look and feel of application registration, so these directions might not always be up to date and the configuration steps might be slightly different. Add a New App Click the Register a new application button. Register App You'll have to copy the Redirect URI from the Red Hat Single Sign-On Add Identity Provider page and enter it into the Authorization callback URL field on the GitHub Register a new OAuth application page. Once you've completed this page you will be brought to the application's management page. GitHub App Page You will need to obtain the client ID and secret from this page so you can enter them into the Red Hat Single Sign-On Add identity provider page. Go back to Red Hat Single Sign-On and specify those items. 12.4.4. GitLab There are a number of steps you have to complete to be able to enable login with GitLab. First, go to the Identity Providers left menu item and select GitLab from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider Before you can click Save , you must obtain a Client ID and Client Secret from GitLab. Note You will use the Redirect URI from this page in a later step, which you will provide to GitLab when you register Red Hat Single Sign-On as a client there. To enable login with GitLab you first have to register an application in GitLab as OAuth2 authentication service provider . Note GitLab often changes the look and feel of application registration, so what you see on the GitLab site may differ. If in doubt, see the GitLab documentation. Add a New App Copy the Redirect URI from the Red Hat Single Sign-On Add Identity Provider page and enter it into the Redirect URI field on the GitLab Add new application page. GitLab App Page When you are done registering, click Save application . This will open the application management page in GitLab. Find the client ID and secret from this page so you can enter them into the Red Hat Single Sign-On Add identity provider page. To finish, return to Red Hat Single Sign-On and enter them. Click Save . 12.4.5. Google There are a number of steps you have to complete to be able to enable login with Google. First, go to the Identity Providers left menu item and select Google from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider You can't click save yet, as you'll need to obtain a Client ID and Client Secret from Google. One piece of data you'll need from this page is the Redirect URI . You'll have to provide that to Google when you register Red Hat Single Sign-On as a client there, so copy this URI to your clipboard. To enable login with Google you first have to create a project and a client in the Google Developer Console . 
Then you need to copy the client ID and secret into the Red Hat Single Sign-On Admin Console. Note Google often changes the look and feel of the Google Developer Console, so these directions might not always be up to date and the configuration steps might be slightly different. Let's see first how to create a project with Google. Log in to the Google Developer Console . Google Developer Console Click the Create Project button. Use any value for Project name and Project ID you want, then click the Create button. Wait for the project to be created (this may take a while). Once created you will be brought to the project's dashboard. Dashboard Then navigate to the APIs & Services section in the Google Developer Console. On that screen, navigate to Credentials administration. When users log into Google from Red Hat Single Sign-On they will see a consent screen from Google which will ask the user if Red Hat Single Sign-On is allowed to view information about their user profile. Thus Google requires some basic information about the product before creating any secrets for it. For a new project, you have first to configure OAuth consent screen . For the very basic setup, filling in the Application name is sufficient. You can also set additional details like scopes for Google APIs in this page. Fill in OAuth consent screen details The step is to create OAuth client ID and client secret. Back in Credentials administration, navigate to Credentials tab and select OAuth client ID under the Create credentials button. Create credentials You will then be brought to the Create OAuth client ID page. Select Web application as the application type. Specify the name you want for your client. You'll also need to copy and paste the Redirect URI from the Red Hat Single Sign-On Add Identity Provider page into the Authorized redirect URIs field. After you do this, click the Create button. Create OAuth client ID After you click Create you will be brought to the Credentials page. Click on your new OAuth 2.0 Client ID to view the settings of your new Google Client. Google Client Credentials You will need to obtain the client ID and secret from this page so you can enter them into the Red Hat Single Sign-On Add identity provider page. Go back to Red Hat Single Sign-On and specify those items. One config option to note on the Add identity provider page for Google is the Default Scopes field. This field allows you to manually specify the scopes that users must authorize when authenticating with this provider. For a complete list of scopes, please take a look at https://developers.google.com/oauthplayground/ . By default, Red Hat Single Sign-On uses the following scopes: openid profile email . If your organization uses the G Suite and you want to restrict access to only members of your organization, you must enter the domain that is used for the G Suite into the Hosted Domain field to enable it. 12.4.6. LinkedIn There are a number of steps you have to complete to be able to enable login with LinkedIn. First, go to the Identity Providers left menu item and select LinkedIn from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider You can't click save yet, as you'll need to obtain a Client ID and Client Secret from LinkedIn. One piece of data you'll need from this page is the Redirect URI . You'll have to provide that to LinkedIn when you register Red Hat Single Sign-On as a client there, so copy this URI to your clipboard. 
To enable login with LinkedIn you first have to create an application in LinkedIn Developer Network . Note LinkedIn may change the look and feel of application registration, so these directions may not always be up to date. Developer Network Click on the Create Application button. This will bring you to the Create a New Application Page. Create App Fill in the form with the appropriate values, then click the Submit button. This will bring you to the new application's settings page. App Settings Select r_basicprofile and r_emailaddress in the Default Application Permissions section. You'll have to copy the Redirect URI from the Red Hat Single Sign-On Add Identity Provider page and enter it into the OAuth 2.0 Authorized Redirect URLs field on the LinkedIn app settings page. Don't forget to click the Update button after you do this! You will then need to obtain the client ID and secret from this page so you can enter them into the Red Hat Single Sign-On Add identity provider page. Go back to Red Hat Single Sign-On and specify those items. 12.4.7. Microsoft There are a number of steps you have to complete to be able to enable login with Microsoft. First, go to the Identity Providers left menu item and select Microsoft from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider You can't click save yet, as you'll need to obtain a Client ID and Client Secret from Microsoft. One piece of data you'll need from this page is the Redirect URI . You'll have to provide that to Microsoft when you register Red Hat Single Sign-On as a client there, so copy this URI to your clipboard. To enable login with Microsoft account you first have to register an OAuth application at Microsoft. Go to the Microsoft Application Registration url. Note Microsoft often changes the look and feel of application registration, so these directions might not always be up to date and the configuration steps might be slightly different. Register Application Enter in the application name and click Create application . This will bring you to the application settings page of your new application. Settings You'll have to copy the Redirect URI from the Red Hat Single Sign-On Add Identity Provider page and add it to the Redirect URIs field on the Microsoft application page. Be sure to click the Add Url button and Save your changes. Finally, you will need to obtain the Application ID and secret from this page so you can enter them back on the Red Hat Single Sign-On Add identity provider page. Go back to Red Hat Single Sign-On and specify those items. Warning From November 2018 onwards, Microsoft is removing support for the Live SDK API in favor of the new Microsoft Graph API. The Red Hat Single Sign-On Microsoft identity provider has been updated to use the new endpoints so make sure to upgrade to Red Hat Single Sign-On version 7.2.5 or later in order to use this provider. Furthermore, client applications registered with Microsoft under "Live SDK applications" will need to be re-registered in the Microsoft Application Registration portal to obtain an application id that is compatible with the Microsoft Graph API. 12.4.8. OpenShift 3 Note OpenShift Online is currently in the developer preview mode. This documentation has been based on on-premise installations and local minishift development environment. There are a just a few steps you have to complete to be able to enable login with OpenShift. 
First, go to the Identity Providers left menu item and select OpenShift from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider Registering OAuth client You can register your client using the oc command line tool. $ oc create -f <(echo ' kind: OAuthClient apiVersion: v1 metadata: name: kc-client 1 secret: "..." 2 redirectURIs: - "http://www.example.com/" 3 grantMethod: prompt 4 ') 1 The name of your OAuth client. Passed as the client_id request parameter when making requests to <openshift_master> /oauth/authorize and <openshift_master> /oauth/token . 2 secret is used as the client_secret request parameter. 3 The redirect_uri parameter specified in requests to <openshift_master> /oauth/authorize and <openshift_master> /oauth/token must be equal to (or prefixed by) one of the URIs in redirectURIs . 4 The grantMethod is used to determine what action to take when this client requests tokens and has not yet been granted access by the user. Use the client ID and secret defined by the oc create command to enter them back on the Red Hat Single Sign-On Add identity provider page. Go back to Red Hat Single Sign-On and specify those items. Please refer to the official OpenShift documentation for more detailed guides. 12.4.9. OpenShift 4 Note Prior to configuring the OpenShift 4 Identity Provider, please look up the correct OpenShift 4 API URL. In some scenarios, that URL might be hidden from users. The easiest way to obtain it is to invoke the following command (this might require installing the jq command separately): curl -s -k -H "Authorization: Bearer $(oc whoami -t)" https://<openshift-user-facing-api-url>/apis/config.openshift.io/v1/infrastructures/cluster | jq ".status.apiServerURL" . In most cases, the address will be protected by HTTPS. Therefore, it is essential to configure X509_CA_BUNDLE in the container and set it to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt . Otherwise, Red Hat Single Sign-On won't be able to communicate with the API Server. There are just a few steps you have to complete to be able to enable login with OpenShift 4. First, go to the Identity Providers left menu item and select OpenShift v4 from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider Registering OAuth client You can register your client using the oc command line tool. $ oc create -f <(echo ' kind: OAuthClient apiVersion: v1 metadata: name: keycloak-broker 1 secret: "..." 2 redirectURIs: - "<copy pasted Redirect URI from OpenShift 4 Identity Providers page>" 3 grantMethod: prompt 4 ') 1 The name of your OAuth client. Passed as the client_id request parameter when making requests to <openshift_master> /oauth/authorize and <openshift_master> /oauth/token . The name parameter needs to be the same in the OAuthClient object as well as in the Red Hat Single Sign-On configuration. 2 secret is used as the client_secret request parameter. 3 The redirect_uri parameter specified in requests to <openshift_master> /oauth/authorize and <openshift_master> /oauth/token must be equal to (or prefixed by) one of the URIs in redirectURIs . The easiest way to configure it correctly is to copy-paste it from the Red Hat Single Sign-On OpenShift 4 Identity Provider configuration page ( Redirect URI field). 4 The grantMethod is used to determine what action to take when this client requests tokens and has not yet been granted access by the user.
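For readability, here is the same OAuthClient registration laid out as it would be typed at a shell prompt. This is only a sketch: the secret and the redirect URI below are placeholders and must match the values shown on the Red Hat Single Sign-On OpenShift 4 Identity Provider page.

$ oc create -f <(echo '
kind: OAuthClient
apiVersion: v1
metadata:
  name: keycloak-broker
secret: "choose-a-long-random-secret"
redirectURIs:
  - "https://sso.example.com/auth/realms/myrealm/broker/openshift-v4/endpoint"
grantMethod: prompt
')

As noted above, the name keycloak-broker must be identical to the client ID you enter in the Red Hat Single Sign-On configuration.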
Use the client ID and secret defined by oc create command to enter them back on the Red Hat Single Sign-On Add identity provider page. Go back to Red Hat Single Sign-On and specify those items. Tip The OpenShift API server returns The client is not authorized to request a token using this method whenever OAuthClient name , secret or redirectURIs is incorrect. Make sure you copy-pasted them into Red Hat Single Sign-On OpenShift 4 Identity Provider page correctly. Please refer to official OpenShift documentation for more detailed guides. 12.4.10. PayPal There are a number of steps you have to complete to be able to enable login with PayPal. First, go to the Identity Providers left menu item and select PayPal from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider You can't click save yet, as you'll need to obtain a Client ID and Client Secret from PayPal. One piece of data you'll need from this page is the Redirect URI . You'll have to provide that to PayPal when you register Red Hat Single Sign-On as a client there, so copy this URI to your clipboard. To enable login with PayPal you first have to register an application project in PayPal Developer applications . Add a New App Click the Create App button. Register App You will now be brought to the app settings page. Do the following changes Choose to configure either Sandbox or Live (choose Live if you haven't enabled the Target Sandbox switch on the Add identity provider page) Copy Client ID and Secret so you can paste them into the Red Hat Single Sign-On Add identity provider page. Scroll down to App Settings Copy the Redirect URI from the Red Hat Single Sign-On Add Identity Provider page and enter it into the Return URL field. Check the Log In with PayPal checkbox. Check the Full name checkbox under the personal information section. Check the Email address checkbox under the address information section. Add both a privacy and a user agreement URL pointing to the respective pages on your domain. 12.4.11. Stack Overflow There are a number of steps you have to complete to be able to enable login with Stack Overflow. First, go to the Identity Providers left menu item and select Stack Overflow from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider To enable login with Stack Overflow you first have to register an OAuth application on StackApps . Go to registering your application on Stack Apps URL and login. Note Stack Overflow often changes the look and feel of application registration, so these directions might not always be up to date and the configuration steps might be slightly different. Register Application Enter in the application name and the OAuth Domain Name of your application and click Register your Application . Type in anything you want for the other items. Settings Finally, you will need to obtain the client ID, secret, and key from this page so you can enter them back on the Red Hat Single Sign-On Add identity provider page. Go back to Red Hat Single Sign-On and specify those items. 12.4.12. Twitter There are a number of steps you have to complete to be able to enable login with Twitter. First, go to the Identity Providers left menu item and select Twitter from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider You can't click save yet, as you'll need to obtain a Client ID and Client Secret from Twitter. 
One piece of data you'll need from this page is the Redirect URI . You'll have to provide that to Twitter when you register Red Hat Single Sign-On as a client there, so copy this URI to your clipboard. To enable login with Twitter you first have to create an application in the Twitter Application Management . Register Application Click on the Create New App button. This will bring you to the Create an Application page. Register Application Enter a Name and Description. The Website can be anything, but cannot have a localhost address. For the Callback URL you must copy the Redirect URI from the Red Hat Single Sign-On Add Identity Provider page. Warning You cannot use localhost in the Callback URL . Instead, replace it with 127.0.0.1 if you are trying to test drive Twitter login on your laptop. After clicking save you will be brought to the Details page. App Details Go to the Keys and Access Tokens tab. Keys and Access Tokens Finally, you will need to obtain the API Key and secret from this page and copy them back into the Client ID and Client Secret fields on the Red Hat Single Sign-On Add identity provider page. 12.4.13. Instagram There are a number of steps you have to complete to be able to enable login with Instagram. First, go to the Identity Providers left menu item and select Instagram from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider You can't click save yet, as you'll need to obtain a Client ID and Client Secret from Instagram. One piece of data you'll need from this page is the Redirect URI . You'll have to provide that to Instagram when you register Red Hat Single Sign-On as a client there, so copy this URI to your clipboard. To enable login with Instagram you first have to create a project and a client. The Instagram API is managed through the Facebook Developer Console . Note Facebook often changes the look and feel of the Facebook Developer Console, so these directions might not always be up to date and the configuration steps might be slightly different. Once you've logged into the console there is a menu in the top right corner of the screen that says My Apps . Select the Add a New App menu item. Add a New App Select For Everything Else . Create a New App ID Fill in all required fields. Once you're done with that, you will be brought to the dashboard for the application. In the menu in the left navigation panel select Basic under Settings . Add Platform Select + Add Platform at the bottom and then click [Website] with the globe icon. Specify the URL of your site. Add a Product Select Dashboard from the left menu and click Set Up in the Instagram box. Then, in the left menu, select Basic Display under Instagram and click Create New App . Create a New Instagram App ID Specify Display Name . Setup the App Copy and paste the Redirect URI from the Red Hat Single Sign-On Add identity provider page into the Valid OAuth Redirect URIs of the Instagram Client OAuth Settings settings block. You can also use this URL for the Deauthorize Callback URL and Data Deletion Request URL . Red Hat Single Sign-On currently doesn't support either of them, but the Facebook Developer Console requires both of them to be filled. You will also need to obtain the App ID and App Secret from this page so you can enter them into the Red Hat Single Sign-On Add identity provider page. To obtain this, click Show under App Secret . Go back to Red Hat Single Sign-On and specify those items and finally save your Instagram Identity Provider.
After this it is necessary to make the Instagram app public. Click App Review left menu item and then Requests . After that follow the instructions on screen. 12.5. OpenID Connect v1.0 Identity Providers Red Hat Single Sign-On can broker identity providers based on the OpenID Connect protocol. These IDPs must support the Authorization Code Flow as defined by the specification in order to authenticate the user and authorize access. To begin configuring an OIDC provider, go to the Identity Providers left menu item and select OpenID Connect v1.0 from the Add provider drop down list. This will bring you to the Add identity provider page. Add Identity Provider The initial configuration options on this page are described in General IDP Configuration . You must define the OpenID Connect configuration options as well. They basically describe the OIDC IDP you are communicating with. Table 12.2. OpenID Connect Config Configuration Description Authorization URL Authorization URL endpoint required by the OIDC protocol. Token URL Token URL endpoint required by the OIDC protocol. Logout URL Logout URL endpoint defined in the OIDC protocol. This value is optional. Backchannel Logout Backchannel logout is a background, out-of-band, REST invocation to the IDP to logout the user. Some IDPs can only perform logout through browser redirects as they may only be able to identity sessions via a browser cookie. User Info URL User Info URL endpoint defined by the OIDC protocol. This is an endpoint from which user profile information can be downloaded. Client Authentication Switch to define the Client Authentication method to be used with the Authorization Code Flow. In the case of JWT signed with private key, the realm private key is used. In the other cases, a client secret has to be defined. For more details, see the Client Authentication specifications . Client ID This realm will act as an OIDC client to the external IDP. Your realm will need an OIDC client ID when using the Authorization Code Flow to interact with the external IDP. Client Secret This realm will need a client secret to use when using the Authorization Code Flow. The value of this field can refer a value from an external vault . Issuer Responses from the IDP may contain an issuer claim. This config value is optional. If specified, this claim will be validated against the value you provide. Default Scopes Space-separated list of OIDC scopes to send with the authentication request. The default is openid . Prompt Another optional switch. This is the prompt parameter defined by the OIDC specification. Through it you can force re-authentication and other options. See the specification for more details. Accepts prompt=none forward from client Specifies whether the IDP accepts forwarded authentication requests that contain the prompt=none query parameter or not. When a realm receives an auth request with prompt=none it checks if the user is currently authenticated and normally returns a login_required error if the user is not logged in. However, when a default IDP can be determined for the auth request (either via kc_idp_hint query param or by setting up a default IDP for the realm) we should be able to forward the auth request with prompt=none to the default IDP so that it checks if the user is currently authenticated there. Because not all IDPs support requests with prompt=none this switch is used to indicate if the default IDP supports the param before redirecting the auth request. 
It is important to note that if the user is not authenticated in the IDP, the client will still get a login_required error. Even if the user is currently authenticated in the IDP, the client might still get an interaction_required error if authentication or consent pages requiring user interaction would be otherwise displayed. This includes required actions (e.g. change password), consent screens and any screens set to be displayed by the first broker login flow or post broker login flow. Validate Signatures Another optional switch. This is to specify if Red Hat Single Sign-On will verify the signatures on the external ID Token signed by this identity provider. If this is on, the Red Hat Single Sign-On will need to know the public key of the external OIDC identity provider. See below for how to set it up. WARNING: For the performance purposes, Red Hat Single Sign-On caches the public key of the external OIDC identity provider. If you think that private key of your identity provider was compromised, it is obviously good to update your keys, but it's also good to clear the keys cache. See Clearing the cache section for more details. Use JWKS URL Applicable if Validate Signatures is on. If the switch is on, then identity provider public keys will be downloaded from given JWKS URL. This allows great flexibility because new keys will be always re-downloaded when the identity provider generates new keypair. If the switch is off, then public key (or certificate) from the Red Hat Single Sign-On DB is used, so whenever the identity provider keypair changes, you will always need to import the new key to the Red Hat Single Sign-On DB as well. JWKS URL URL where the identity provider JWK keys are stored. See the JWK specification for more details. If you use an external Red Hat Single Sign-On as an identity provider, then you can use URL like http://broker-keycloak:8180/auth/realms/test/protocol/openid-connect/certs assuming your brokered Red Hat Single Sign-On is running on http://broker-keycloak:8180 and it's realm is test . Validating Public Key Applicable if Use JWKS URL is off. Here is the public key in PEM format that must be used to verify external IDP signatures. Validating Public Key Id Applicable if Use JWKS URL is off. This field specifies ID of the public key in PEM format. This config value is optional. As there is no standard way for computing key ID from key, various external identity providers might use different algorithm from Red Hat Single Sign-On. If the value of this field is not specified, the validating public key specified above is used for all requests regardless of key ID sent by external IDP. When set, value of this field serves as key ID used by Red Hat Single Sign-On for validating signatures from such providers and must match the key ID specified by the IDP. You can also import all this configuration data by providing a URL or file that points to OpenID Provider Metadata (see OIDC Discovery specification). If you are connecting to a Red Hat Single Sign-On external IDP, you can import the IDP settings from the url <root>/auth/realms/{realm-name}/.well-known/openid-configuration . This link is a JSON document describing metadata about the IDP. 12.6. SAML v2.0 Identity Providers Red Hat Single Sign-On can broker identity providers based on the SAML v2.0 protocol. To begin configuring an SAML v2.0 provider, go to the Identity Providers left menu item and select SAML v2.0 from the Add provider drop down list. This will bring you to the Add identity provider page. 
Add Identity Provider The initial configuration options on this page are described in General IDP Configuration . You must define the SAML configuration options as well. They basically describe the SAML IDP you are communicating with. Table 12.3. SAML Config Configuration Description Service Provider Entity ID This is a required field and specifies the SAML Entity ID that the remote Identity Provider will use to identify requests coming from this Service Provider. By default it is set to the realm base URL <root>/auth/realms/{realm-name} . Single Sign-On Service URL This is a required field and specifies the SAML endpoint to start the authentication process. If your SAML IDP publishes an IDP entity descriptor, the value of this field will be specified there. Single Logout Service URL This is an optional field that specifies the SAML logout endpoint. If your SAML IDP publishes an IDP entity descriptor, the value of this field will be specified there. Backchannel Logout Enable if your SAML IDP supports backchannel logout. NameID Policy Format Specifies the URI reference corresponding to a name identifier format. Defaults to urn:oasis:names:tc:SAML:2.0:nameid-persistent . Principal Type Specifies which part of the SAML assertion will be used to identify and track external user identities. Can be either Subject NameID or SAML attribute (either by name or by friendly name). Principal Attribute If Principal is set to either "Attribute [Name]" or "Attribute [Friendly Name]", this field will specify the name or the friendly name of the identifying attribute, respectively. HTTP-POST Binding Response When this realm responds to any SAML requests sent by the external IDP, which SAML binding should be used? If set to off , then the Redirect Binding will be used. HTTP-POST Binding for AuthnRequest When this realm requests authentication from the external SAML IDP, which SAML binding should be used? If set to off , then the Redirect Binding will be used. Want AuthnRequests Signed If true, it will use the realm's keypair to sign requests sent to the external SAML IDP. Signature Algorithm If Want AuthnRequests Signed is on, then you can also pick the signature algorithm to use. SAML Signature Key Name Signed SAML documents sent via POST binding contain identification of signing key in KeyName element. This by default contains Red Hat Single Sign-On key ID. However various external SAML IDPs might expect a different key name or no key name at all. This switch controls whether KeyName contains key ID (option KEY_ID ), subject from certificate corresponding to the realm key (option CERT_SUBJECT - expected for instance by Microsoft Active Directory Federation Services), or that the key name hint is completely omitted from the SAML message (option NONE ). Force Authentication Indicates that the user will be forced to enter their credentials at the external IDP even if they are already logged in. Validate Signature Whether or not the realm should expect that SAML requests and responses from the external IDP to be digitally signed. It is highly recommended you turn this on! Validating X509 Certificate The public certificate that will be used to validate the signatures of SAML requests and responses from the external IDP. You can also import all this configuration data by providing a URL or file that points to the SAML IDP entity descriptor of the external IDP. 
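If you prefer to create brokered providers from a script, the OIDC and SAML options described in the two tables above map onto the Admin CLI ( kcadm.sh ) roughly as shown below. This is a hedged sketch only: the realm myrealm , the aliases, the endpoint URLs, and the config key names ( authorizationUrl , tokenUrl , clientId , clientSecret , defaultScope , singleSignOnServiceUrl , and so on) are assumptions for illustration and should be checked against the Admin REST API of your Red Hat Single Sign-On version.

# Minimal OpenID Connect identity provider (config key names assumed).
./kcadm.sh create identity-provider/instances -r myrealm \
  -s alias=my-oidc-idp \
  -s providerId=oidc \
  -s enabled=true \
  -s 'config.authorizationUrl=https://idp.example.com/oauth2/authorize' \
  -s 'config.tokenUrl=https://idp.example.com/oauth2/token' \
  -s 'config.clientId=my-realm-client' \
  -s 'config.clientSecret=<client secret>' \
  -s 'config.defaultScope=openid' \
  -s 'config.validateSignature=true' \
  -s 'config.useJwksUrl=true' \
  -s 'config.jwksUrl=https://idp.example.com/oauth2/keys'

# Minimal SAML v2.0 identity provider (config key names assumed).
./kcadm.sh create identity-provider/instances -r myrealm \
  -s alias=my-saml-idp \
  -s providerId=saml \
  -s enabled=true \
  -s 'config.singleSignOnServiceUrl=https://idp.example.com/saml/sso' \
  -s 'config.postBindingResponse=true' \
  -s 'config.postBindingAuthnRequest=true' \
  -s 'config.wantAuthnRequestsSigned=true' \
  -s 'config.validateSignature=true'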
If you are connecting to a Red Hat Single Sign-On external IDP, you can import the IDP settings from the URL <root>/auth/realms/{realm-name}/protocol/saml/descriptor . This link is an XML document describing metadata about the IDP. You can also import all this configuration data by providing a URL or XML file that points to the entity descriptor of the external SAML IDP you want to connect to. 12.6.1. SP Descriptor Once you create a SAML provider, there is an EXPORT button that appears when viewing that provider. Clicking this button will export a SAML SP entity descriptor which you can use to import into the external SP. This metadata is also available publicly by going to the URL. 12.7. Client-suggested Identity Provider OIDC applications can bypass the Red Hat Single Sign-On login page by specifying a hint on which identity provider they want to use. This is done by setting the kc_idp_hint query parameter in the Authorization Code Flow authorization endpoint. Red Hat Single Sign-On OIDC client adapters also allow you to specify this query parameter when you access a secured resource at the application. For example: In this case, it is expected that your realm has an identity provider with an alias facebook . If this provider doesn't exist the login form will be displayed. If you are using keycloak.js adapter, you can also achieve the same behavior: var keycloak = new Keycloak('keycloak.json'); keycloak.createLoginUrl({ idpHint: 'facebook' }); The kc_idp_hint query parameter also allows the client to override the default identity provider if one is configured for the Identity Provider Redirector authenticator. The client can also disable the automatic redirecting by setting the kc_idp_hint query parameter to an empty value. 12.8. Mapping Claims and Assertions You can import the SAML and OpenID Connect metadata provided by the external IDP you are authenticating with into the environment of the realm. This allows you to extract user profile metadata and other information so that you can make it available to your applications. Each new user that logs into your realm via an external identity provider will have an entry for them created in the local Red Hat Single Sign-On database, based on the metadata from the SAML or OIDC assertions and claims. If you click on an identity provider listed in the Identity Providers page for your realm, you will be brought to the IDPs Settings tab. On this page there is also a Mappers tab. Click on that tab to start mapping your incoming IDP metadata. There is a Create button on this page. Clicking on this create button allows you to create a broker mapper. Broker mappers can import SAML attributes or OIDC ID/Access token claims into user attributes and user role mappings. Select a mapper from the Mapper Type list. Hover over the tooltip to see a description of what the mapper does. The tooltips also describe what configuration information you need to enter. Click Save and your new mapper will be added. For JSON based claims, you can use dot notation for nesting and square brackets to access array fields by index. For example 'contact.address[0].country'. To investigate the structure of user profile JSON data provided by social providers you can enable the DEBUG level logger org.keycloak.social.user_profile_dump . This is done in the server's app-server configuration file (domain.xml or standalone.xml). 12.9. 
Available User Session Data After a user logs in from the external IDP, there is some additional user session note data that Red Hat Single Sign-On stores that you can access. This data can be propagated to the client requesting a login via the token or SAML assertion being passed back to it by using an appropriate client mapper. identity_provider This is the IDP alias of the broker used to perform the login. identity_provider_identity This is the IDP username of the currently authenticated user. This is often the same as the Red Hat Single Sign-On username, but doesn't necessarily need to be. For example, the Red Hat Single Sign-On user john can be linked to the Facebook user [email protected] , so in that case the value of the user session note will be [email protected] . You can use a Protocol Mapper of type User Session Note to propagate this information to your clients. 12.10. First Login Flow When a user logs in through identity brokering, some aspects of the user are imported and linked within the realm's local database. When Red Hat Single Sign-On successfully authenticates users through an external identity provider there can be two situations: There is already a Red Hat Single Sign-On user account imported and linked with the authenticated identity provider account. In this case, Red Hat Single Sign-On will just authenticate as the existing user and redirect back to the application. There is not yet an existing Red Hat Single Sign-On user account imported and linked for this external user. Usually you just want to register and import the new account into the Red Hat Single Sign-On database, but what if there is an existing Red Hat Single Sign-On account with the same email? Automatically linking the existing local account to the external identity provider is a potential security hole as you can't always trust the information you get from the external identity provider. Different organizations have different requirements when dealing with some of the conflicts and situations listed above. For this, there is a First Login Flow option in the IDP settings which allows you to choose a workflow that will be used after a user logs in from an external IDP the first time. By default it points to the first broker login flow, but you can configure and use your own flow and use different flows for different identity providers. The flow itself is configured in the admin console under the Authentication tab. When you choose the First Broker Login flow, you will see which authenticators are used by default. You can re-configure the existing flow. (For example, you can disable some authenticators, mark some of them as required , configure some authenticators, and so on.) 12.10.1. Default First Login Flow Let's describe the default behavior provided by the First Broker Login flow. Review Profile This authenticator might display the profile info page, where the user can review their profile retrieved from an identity provider. The authenticator is configurable. You can set the Update Profile On First Login option. When On , users will always be presented with the profile page asking for additional information in order to federate their identities. When missing , users will be presented with the profile page only if some mandatory information (email, first name, last name) is not provided by the identity provider. If Off , the profile page won't be displayed, unless the user later clicks the Review profile info link (a page displayed in a later phase by the Confirm Link Existing Account authenticator).
Create User If Unique This authenticator checks if there is already an existing Red Hat Single Sign-On account with the same email or username like the account from the identity provider. If it's not, then the authenticator just creates a new local Red Hat Single Sign-On account and links it with the identity provider and the whole flow is finished. Otherwise it goes to the Handle Existing Account subflow. If you always want to ensure that there is no duplicated account, you can mark this authenticator as REQUIRED . In this case, the user will see the error page if there is an existing Red Hat Single Sign-On account and the user will need to link his identity provider account through Account management. Confirm Link Existing Account On the info page, the user will see that there is an existing Red Hat Single Sign-On account with the same email. They can review their profile again and use different email or username (flow is restarted and goes back to Review Profile authenticator). Or they can confirm that they want to link their identity provider account with their existing Red Hat Single Sign-On account. Disable this authenticator if you don't want users to see this confirmation page, but go straight to linking identity provider account by email verification or re-authentication. Verify Existing Account By Email This authenticator is ALTERNATIVE by default, so it's used only if the realm has SMTP setup configured. It will send email to the user, where they can confirm that they want to link the identity provider with their Red Hat Single Sign-On account. Disable this if you don't want to confirm linking by email, but instead you always want users to reauthenticate with their password (and alternatively OTP). Verify Existing Account By Re-authentication This authenticator is used if email authenticator is disabled or not available (SMTP not configured for realm). It will display a login screen where the user needs to authenticate to link their Red Hat Single Sign-On account with the Identity provider. User can also re-authenticate with some different identity provider, which is already linked to their Red Hat Single Sign-On account. You can also force users to use OTP. Otherwise it's optional and used only if OTP is already set for the user account. 12.10.2. Automatically Link Existing First Login Flow Warning The AutoLink authenticator would be dangerous in a generic environment where users can register themselves using arbitrary usernames/email addresses. Do not use this authenticator unless registration of users is carefully curated and usernames/email addresses are assigned, not requested. In order to configure a first login flow in which users are automatically linked without being prompted, create a new flow with the following two authenticators: Create User If Unique This authenticator ensures that unique users are handled. Set the authenticator requirement to "Alternative". Automatically Set Existing User Automatically sets an existing user to the authentication context without any verification. Set the authenticator requirement to "Alternative". Note The described setup uses two authenticators. This setup is the simplest one, but it is possible to use other authenticators according to your needs. For example, you can add the Review Profile authenticator to the beginning of the flow if you still want end users to confirm their profile information. You can also add authentication mechanisms to this flow, forcing a user to verify his credentials. 
This would require a more complex flow, for example setting the "Automatically Set Existing User" and "Password Form" as "Required" in an "Alternative" sub-flow. 12.10.3. Disabling Automatic User Creation The Default first login flow will look up a Keycloak account matching the external identity, and will then offer to link them; if there is no matching Keycloak account, it will automatically create one. This default behavior may be unsuitable for some setups, for example, when using read-only LDAP user store (which means all users are pre-created). In this case, automatic user creation should be turned off. To disable user creation: open the First Broker Login flow configuration; set Create User If Unique to DISABLED ; set Confirm Link Existing Account to DISABLED . This configuration also implies that Keycloak itself won't be able to determine which internal account would correspond to the external identity. Therefore, the Verify Existing Account By Re-authentication authenticator will ask the user to provide both username and password. 12.11. Retrieving External IDP Tokens Red Hat Single Sign-On allows you to store tokens and responses from the authentication process with the external IDP. For that, you can use the Store Token configuration option on the IDP's settings page. Application code can retrieve these tokens and responses to pull in extra user information, or to securely invoke requests on the external IDP. For example, an application might want to use the Google token to invoke on other Google services and REST APIs. To retrieve a token for a particular identity provider you need to send a request as follows: An application must have authenticated with Red Hat Single Sign-On and have received an access token. This access token will need to have the broker client-level role read-token set. This means that the user must have a role mapping for this role and the client application must have that role within its scope. In this case, given that you are accessing a protected service in Red Hat Single Sign-On, you need to send the access token issued by Red Hat Single Sign-On during the user authentication. In the broker configuration page you can automatically assign this role to newly imported users by turning on the Stored Tokens Readable switch. These external tokens can be re-established by either logging in again through the provider, or using the client-initiated account linking API. 12.12. Identity broker logout When logout from Red Hat Single Sign-On is triggered, Red Hat Single Sign-On will send a request to the external identity provider that was used to login to Keycloak, and the user will be logged out from this identity provider as well. It is possible to skip this behavior and avoid logout at the external identity provider. See adapter logout documentation for more details.
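As a concrete illustration of the token retrieval endpoint described in Section 12.11 above, a client that already holds a Red Hat Single Sign-On access token (with the broker read-token role) can fetch the stored external token with a plain HTTP call. The realm name myrealm and the provider alias facebook below are assumptions for the example; the host and the token placeholder must match your deployment.

# Retrieve the stored external IDP token for the "facebook" identity provider.
curl -H "Authorization: Bearer <KEYCLOAK ACCESS TOKEN>" \
  "http://localhost:8080/auth/realms/myrealm/broker/facebook/token"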
|
[
"oc create -f <(echo ' kind: OAuthClient apiVersion: v1 metadata: name: kc-client 1 secret: \"...\" 2 redirectURIs: - \"http://www.example.com/\" 3 grantMethod: prompt 4 ')",
"oc create -f <(echo ' kind: OAuthClient apiVersion: v1 metadata: name: keycloak-broker 1 secret: \"...\" 2 redirectURIs: - \"<copy pasted Redirect URI from OpenShift 4 Identity Providers page>\" 3 grantMethod: prompt 4 ')",
"http[s]://{host:port}/auth/realms/{realm-name}/broker/{broker-alias}/endpoint/descriptor",
"GET /myapplication.com?kc_idp_hint=facebook HTTP/1.1 Host: localhost:8080",
"var keycloak = new Keycloak('keycloak.json'); keycloak.createLoginUrl({ idpHint: 'facebook' });",
"GET /auth/realms/{realm}/broker/{provider_alias}/token HTTP/1.1 Host: localhost:8080 Authorization: Bearer <KEYCLOAK ACCESS TOKEN>"
] |
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_administration_guide/identity_broker
|
Chapter 1. Testing Camel K integration locally
|
Chapter 1. Testing Camel K integration locally This chapter provides details on how to use Camel jBang to locally test a Camel K integration. Section 1.1, "Using Camel jBang to locally test a Camel K integration" 1.1. Using Camel jBang to locally test a Camel K integration Testing is one of the main operations performed repeatedly while building any application. With the advent of Camel JBang , we have a unified place that can be used to perform testing and fine tuning locally before moving to a higher environment. Testing or fine tuning an integration directly connected to a Cloud Native environment is a bit cumbersome. You must be connected to the cluster, or alternatively, you need a local Kubernetes cluster running on your machine (Minikube, Kind, etc.). Most of the time, the aspects inherent to cluster fine tuning arrive late in the development. Therefore, it is good to have a lighter way of testing our applications locally and then move to a deployment stage where we can apply the tuning that is typical of a cloud native environment. In the past, kamel local was the command used to test an Integration locally. However, it overlaps with the Camel community's effort to have a single CLI that can be used to test any Camel application locally, independently of where it is going to be deployed. 1.1.1. Camel JBang installation First, we need to install and get familiar with the jbang and camel CLIs. You can follow the official documentation about Camel JBang to install the CLIs to your local environment. After this, we can see how to test an Integration for Camel K with Camel JBang. 1.1.2. Simple application development The first application we develop is a simple one, and it defines the process you must follow when testing any Integration that is eventually deployed in Kubernetes via Camel K. Verify the target version of Camel in your Camel K installation. With this information we can ensure that we test locally against the same version that we will later deploy in a cluster. The commands above are useful to find out which Camel version is used by the runtime in your cluster Camel K installation. Our target is Camel version 3.18.3. The easiest way to initialize a Camel route is to run the camel init command: At this stage, we can edit the file with the logic we need for our integration, or simply run it: A local Java process starts with a Camel application running. There is no need to create a Maven project; all the boilerplate is handled by Camel JBang! However, you may notice that the Camel version used is different from the one we want to target. This is because your Camel JBang is using a different version of Camel. No worries, we can re-run this application specifying the Camel version we want to run: Note Camel JBang uses a default Camel version; if you want, you can use the -Dcamel.jbang.version option to explicitly set a Camel version, overwriting the default. The next step is to run it in a Kubernetes cluster where Camel K is installed. Let us use the Camel K plugin for Camel JBang here instead of the kamel CLI. This way, you can use the same JBang tooling to both run the Camel K integration locally and on the K8s cluster with the operator. The JBang plugin documentation can be found here: Camel JBang Kubernetes . You can see that the Camel K operator takes care of the necessary transformation and builds the Integration and related resources according to the expected lifecycle. Once this is live, you can follow up with the operations you usually do on a deployed Integration.
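To recap the tooling used in this section in one place, the following is a minimal sketch of the commands involved. The JBang installation command follows the upstream Camel JBang documentation, and 3.18.3 is simply the Camel version targeted in this example; adjust both to your environment.

# Install JBang, then install the Camel CLI as a JBang application.
curl -Ls https://sh.jbang.dev | bash -s - app setup
jbang app install camel@apache/camel

# Scaffold a route and run it, pinning the Camel version to match the cluster runtime.
camel init HelloJBang.java
jbang run -Dcamel.jbang.version=3.18.3 camel@apache/camel run HelloJBang.java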
The benefit of this process is that you need not worry about the remote cluster until you are satisfied with the Integration you have tuned locally. 1.1.3. Fine tuning for Cloud Once your Integration is ready, you must take care of the kind of tuning that is related to cluster deployment. This way, you need not worry about deployment details at an early stage of the development. Or you can even have a separation of roles in your company where the domain expert develops the integration locally and the cluster expert does the deployment at a later stage. Let us see an example of how to develop an integration that will later need some fine tuning in the cluster. import org.apache.camel.builder.RouteBuilder; public class MyJBangRoute extends RouteBuilder { @Override public void configure() throws Exception { from("file:/tmp/input") .convertBodyTo(String.class) .log("Processing file USD{headers.CamelFileName} with content: USD{body}") /* .filter(simple("USD{body} !contains 'checked'")) .log("WARN not checked: USD{body}") .to("file:/tmp/discarded") .end() .to("file:/tmp/output"); */ .choice() .when(simple("USD{body} !contains 'checked'")) .log("WARN not checked!") .to("file:/tmp/discarded") .otherwise() .to("file:/tmp/output") .end(); } } There is a process that is in charge of writing files into a directory. You must filter those files based on their content. We have left the commented-out code on purpose because it shows the way we developed iteratively. We tested something locally with Camel JBang, until we came to the final version of the integration. We first tried the Filter EIP, but while testing we realized we needed a Content Based Router EIP instead. It should sound like a familiar process, as it probably happens every time we develop something. Now that we are ready, we run a last round of testing locally via Camel JBang: We have tested it by adding files to the input directory. Ready to promote to my development cluster! Use the Camel K JBang plugin here to run the integration on K8s so you do not need to switch tooling. Run the following command: The Integration started correctly, but we are using a file system that is local to the Pod where the Integration is running. 1.1.3.1. Kubernetes fine tuning Now, let us configure our application for the cloud. Cloud Native development must take into consideration a series of challenges that are implicit in the way this new paradigm works (as a reference, see the 12 factors ). Kubernetes can sometimes be a bit difficult to fine tune, with many resources to edit and check. Camel K provides a user-friendly way to apply most of the tuning your application needs directly in the kamel run command (or in the modeline ). You must get familiar with Camel K Traits . In this case we want to use certain volumes that are available in our cluster. We can use the --volume option (syntactic sugar for the mount trait ) and enable them easily. We can read and write those volumes from some other Pod ; it depends on the architecture of our Integration process. You may need to iterate on this tuning as well, but at least, now that the internals of the route have been polished locally, you can focus on deployment aspects only. And, once you are ready with this, take advantage of kamel promote to move your Integration through the various stages of development . 1.1.4. How to test Kamelet locally? Another benefit of Camel JBang is the ability to test a Kamelet locally. Until now, the easiest way to test a Kamelet was to upload it to a Kubernetes cluster and to run some Integration using it via Camel K.
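Before moving on to Kamelets, here is a short recap of the cluster-side commands from the previous section. The persistent volume claim names, the integration name, and the --to namespace flag on kamel promote are illustrative assumptions, so check kamel run --help and kamel promote --help for the exact options available in your version.

# Run the Integration with the mount trait's --volume shortcut (claim names are examples).
kamel run MyJBangRoute.java \
  --volume my-pv-claim-input:/tmp/input \
  --volume my-pv-claim-output:/tmp/output \
  --volume my-pv-claim-discarded:/tmp/discarded \
  --dev

# Once satisfied, promote the Integration to the next stage (integration and namespace names assumed).
kamel promote my-jbang-route --to production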
Let us develop a simple Kamelet for this scope. It is a Coffee source we are using to generate random coffee events. apiVersion: camel.apache.org/v1 kind: Kamelet metadata: name: coffee-source annotations: camel.apache.org/kamelet.support.level: "Stable" camel.apache.org/catalog.version: "4.7.0-SNAPSHOT" camel.apache.org/kamelet.icon: "data:image/svg+xml;base64,<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://www.w3.org/2000/svg" height="92pt" width="92pt" version="1.0" xmlns:cc="http://creativecommons.org/ns#" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/">
	<defs>
		<linearGradient id="a">
			<stop stop-color="#ffffff" stop-opacity=".5" offset="0"/>
			<stop stop-color="#ffffff" stop-opacity=".1" offset="1"/>
		</linearGradient>
		<linearGradient id="d" y2="62.299" xlink:href="#a" gradientUnits="userSpaceOnUse" y1="33.61" gradientTransform="matrix(.78479 0 0 1.2742 -25.691 -8.5635)" x2="95.689" x1="59.099"/>
		<linearGradient id="c" y2="241.09" xlink:href="#a" gradientUnits="userSpaceOnUse" y1="208.04" gradientTransform="matrix(1.9777 0 0 .50563 -25.691 -8.5635)" x2="28.179" x1="17.402"/>
		<linearGradient id="b" y2="80.909" xlink:href="#a" gradientUnits="userSpaceOnUse" y1="55.988" gradientTransform="matrix(1.5469 0 0 .64647 -25.691 -8.5635)" x2="87.074" x1="70.063"/>
	</defs>
	<path stroke-linejoin="round" d="m12.463 24.886c2.352 1.226 22.368 5.488 33.972 5.226 16.527 0.262 30.313-6.049 32.927-7.055 0 1.433-2.307 10.273-2.614 15.679 0 5.448 1.83 28.415 2.091 33.711 0.868 6.178 2.704 13.861 4.443 19.077 1.829 3.553-23.563 9.856-34.757 10.456-12.602 0.78-38.937-4.375-37.369-8.366 0-3.968 3.659-13.383 3.659-19.599 0.522-6.025-0.262-23.273-0.262-30.836-0.261-6.78-1.053-12.561-2.09-18.293z" fill-rule="evenodd" stroke="#000000" stroke-width="1pt" fill="#fbd900"/>
	<path d="m10.633 94.659c-5.5851-1.331-7.8786 10.111-1.8288 12.021 6.3678 3.75 29.703 7.06 39.199 6.27 11.101-0.26 31.192-4.44 35.801-8.36 6.134-3.92 5.466-13.066 0-12.021-3.278 3.658-26.699 8.881-36.585 9.411-9.223 0.78-30.749-2.53-36.586-7.321z" fill-rule="evenodd" stroke="#000000" stroke-width="1pt" fill="#fbf3bf"/>
	<path stroke-linejoin="bevel" d="m77.382 34.046c1.245-3.212 9.639-6.972 12.364-7.516 4.686-1.05 12.384-1.388 16.764 4.28 7.94 10.323 6.76 28.626 2.86 34.638-2.78 5.104-9.371 10.282-14.635 11.878-5.151 1.533-12.707 2.661-14.333 3.711-0.35-1.296-1.327-7.388-1.38-9.071 1.95 0.128 7.489-0.893 11.695-1.868 3.902-0.899 6.45-3.274 9.333-6.222 5-4.7 4.35-21.16 0.54-25.057-2.233-2.262-6.849-3.904-9.915-3.323-4.992 1.032-13.677 7.366-13.677 6.98-0.508-2.08-0.25-6.159 0.384-8.43z" fill-rule="evenodd" stroke="#000000" stroke-width="1.25" fill="#fbf3bf"/>
	<path stroke-linejoin="round" d="m32.022 38.368c1.655 1.206-1.355 16.955-0.942 28.131 0.414 14.295 1.444 23.528-0.521 24.635-3.108 1.675-9.901-0.135-12.046-2.42-1.273-1.507 1.806-10.24 2.013-16.429-0.414-8.711-1.703-33.303-0.461-34.778 2.252-2.053 9.681-1.152 11.957 0.861z" fill-rule="evenodd" stroke="#000000" stroke-width="1.25" fill="#fbe600"/>
	<path d="m40.612 39.037c-1.478 1.424-0.063 19.625-0.063 22.559 0.305 3.808-1.101 27.452-0.178 28.954 1.848 2.122 10.216 2.442 13.001-0.356 1.505-1.875-0.478-22.544-0.478-27.68 0-5.51 1.407-22.052-0.44-23.58-2.033-2.149-8.44-3.18-11.842 0.103z" fill-rule="evenodd" stroke="#000000" stroke-width="1pt" fill="#fbe600"/>
	<path stroke-linejoin="round" d="m60.301 37.593c-1.658 1.256 1.179 15.8 1.194 26.982 0.137 14.299-1.245 24.662 0.824 25.709 3.268 1.578 10.881-1.542 13-3.891 1.253-1.545-1.411-10.179-2.082-16.358-0.984-8.164 0.148-33.128-1.189-34.564-2.402-1.984-9.482 0.04-11.747 2.122z" fill-rule="evenodd" stroke="#000000" stroke-width="1.25" fill="#fbe600"/>
	<path d="m53.582 31.12c-4.989 1.109-36.588-3.141-39.729-4.804 0.924 4.62 3.141 45.272 1.663 49.892 0.185 2.032-3.88 15.152-3.695 17.924 17.184-68.37 39.728-48.968 41.761-63.012z" fill-rule="evenodd" fill="url(#d)"/>
	<path d="m10.027 95.309c-3.0515-0.897-5.2053 6.821-2.872 9.151 5.743 2.69 13.282-2.33 38.23-1.61-12.743-0.36-31.589-2.874-35.358-7.541z" fill-rule="evenodd" fill="url(#c)"/>
	<path d="m78.59 33.567c4.487-4.488 8.794-5.564 13.999-6.462 8.791-2.333 14.901 3.769 16.871 11.846-4.49-7.179-10.23-8.256-14.178-8.436-4.128 0.718-15.795 7.898-16.872 9.154s-0.718-4.128 0.18-6.102z" fill-rule="evenodd" fill="url(#b)"/>
	<path stroke-linejoin="round" d="m11.408 77.34c2.3832 1.159 4.2811-1.5693 3.4649-3.0303 0.91503 0.08658 1.7948-0.3254 1.7948-1.7948 0.72044-0.72044-0.36461-1.8544-0.36461-2.7357-0.99354-0.99354 0.0056-2.165 0.0056-3.7257 0-1.5535 0.89742-2.5024 0.89742-4.1281 0-2.3611 2.0594-1.1807 0.89742-4.6666 1.0882-0.42455 2.2741-1.4845 0.89742-2.6923 2.1601-0.23952 3.2186-2.3542 0.53845-4.6666 4.0734 0-4.2302-8.7305 2.6923-6.9999 2.222-0.55551 1.7948-2.2151 1.7948-4.3076 2.8717 3.9487 6.8954 2.6213 7.5383 0 1.3486 4.3998 10.59 2.5869 10.59-2.8717 0.17948 6.7502 7.1177 3.4046 8.4358 3.9486-1.6154 1.8662 1.5841 9.0796 4.3076 9.1537-6.3097 4.7323-5.1729 13.001 2.5128 14.538 3.8938 0 5.3845-3.2785 5.3845-7.8973 1.2564 2.6447 6.972 4.2797 6.9999-0.17948 2.8717 5.5446 6.4959-1.4704 4.3076-2.1538 5.0256 1.9057 3.2128-6.9811 1.3785-9.056 2.8718-0.91448 1.8346-7.6184 0.0574-9.7898 2.6212 2.6652 6.7385-0.83112 6.282-5.923 1.228 3.4671 9.1475-0.36828 3.7692-8.4358 0-1.5451-4.4871-1.7488-5.564-0.53845-0.01541-5.4461-4.0997-9.6921-6.9999-8.6152 1.799-2.6932-9.048-4.8999-11.308-0.539 1.351-5.7012-13.81-9.3336-14.179-6.1029-1.748-2.5128-11.771-2.5586-14.718 6.2819 0-4.8606-16.309-6.9999-15.974 0.35897-3.4899-2.4331-9.2274 0.35897-8.7947 3.2307-5.3845-2.7034-7.842 9.5611-3.4102 10.231-2.5128 2.2624-2.6923 11.311 0.53845 11.128-1.9743 2.1297-0.89742 8.4366 1.2564 8.6152-1.6794 2.3206 0.2457 13.674 7.1794 11.846 0 2.5234 0.70877 4.6941-0.17948 7.3588 0 1.5455-0.89742 2.8528-0.89742 4.4871 0.37206 0.74412-1.2597 2.7244 0.53845 3.9486-4.2167 1.7593-3.3024 4.4642-1.6701 5.7226z" fill-rule="evenodd" stroke="#000000" stroke-width="1pt" fill="#ffffff"/>
	<path stroke-linejoin="round" d="m11.317 32.574c-1.5098-1.65 1.221-7.04 4.242-6.763 0.689-2.474 2.586-2.892 4.688-2.187-1.048-2.045 1.503-3.992 3.75-1.682 1.517-2.622 4.677-4.645 6.356-3.231-0.132-3.373 6.063-6.794 8.331-3.837 0 0.606-0.362 1.875 0 1.875" stroke="#000000" stroke-linecap="round" stroke-width="1pt" fill="none"/>
	<path stroke-linejoin="round" d="m48.372 22.374c-0.104-4.721 14.009-8.591 11.25-0.313 1.269-0.634 6.875-1.299 5.844 2.314 4.123-0.466 10.39 1.104 6.662 6.688 2.396 1.806 1.331 6.696-0.319 5.061" stroke="#000000" stroke-linecap="round" stroke-width="1pt" fill="none"/>
</svg>
" camel.apache.org/provider: "Apache Software Foundation" camel.apache.org/kamelet.group: "Coffees" camel.apache.org/kamelet.namespace: "Dataset" labels: camel.apache.org/kamelet.type: "source" spec: definition: title: "Coffee Source" description: "Produces periodic events about coffees!" type: object properties: period: title: Period description: The time interval between two events type: integer default: 5000 types: out: mediaType: application/json dependencies: - "camel:timer" - "camel:http" - "camel:kamelet" template: from: uri: "timer:coffee" parameters: period: "{{period}}" steps: - to: https://random-data-api.com/api/coffee/random_coffee - removeHeaders: pattern: '*' - to: "kamelet:sink" To test it, we can use a simple Integration to log its content: - from: uri: "kamelet:coffee-source?period=5000" steps: - log: "USD{body}" Now we can run: This is a boost while you are programming a Kamelet, because you can have a quick feedback without the need of a cluster. Once ready, you can continue your development as usual uploading the Kamelet to the cluster and using in your Camel K integrations.
|
[
"kamel version -a -v | grep Runtime Runtime Version: 3.8.1 kubectl get camelcatalog camel-catalog-3.8.1 -o yaml | grep camel\\.version camel.version: 3.8.1",
"camel init HelloJBang.java",
"camel run HelloJBang.java 2022-11-23 12:11:05.407 INFO 52841 --- [ main] org.apache.camel.main.MainSupport : Apache Camel (JBang) 3.18.1 is starting 2022-11-23 12:11:05.470 INFO 52841 --- [ main] org.apache.camel.main.MainSupport : Using Java 11.0.17 with PID 52841. Started by squake in /home/squake/workspace/jbang/camel-blog 2022-11-23 12:11:07.537 INFO 52841 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) is starting 2022-11-23 12:11:07.675 INFO 52841 --- [ main] e.camel.impl.engine.AbstractCamelContext : Routes startup (started:1) 2022-11-23 12:11:07.676 INFO 52841 --- [ main] e.camel.impl.engine.AbstractCamelContext : Started java (timer://java) 2022-11-23 12:11:07.676 INFO 52841 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) started in 397ms (build:118ms init:140ms start:139ms JVM-uptime:3s) 2022-11-23 12:11:08.705 INFO 52841 --- [ - timer://java] HelloJBang.java:14 : Hello Camel from java 2022-11-23 12:11:09.676 INFO 52841 --- [ - timer://java] HelloJBang.java:14 : Hello Camel from java",
"jbang run -Dcamel.jbang.version=3.18.3 camel@apache/camel run HelloJBang.java [1] 2022-11-23 11:13:02,825 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.18.3 (camel-1) started in 70ms (build:0ms init:61ms start:9ms)",
"import org.apache.camel.builder.RouteBuilder; public class MyJBangRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"file:/tmp/input\") .convertBodyTo(String.class) .log(\"Processing file USD{headers.CamelFileName} with content: USD{body}\") /* .filter(simple(\"USD{body} !contains 'checked'\")) .log(\"WARN not checked: USD{body}\") .to(\"file:/tmp/discarded\") .end() .to(\"file:/tmp/output\"); */ .choice() .when(simple(\"USD{body} !contains 'checked'\")) .log(\"WARN not checked!\") .to(\"file:/tmp/discarded\") .otherwise() .to(\"file:/tmp/output\") .end(); } }",
"jbang run -Dcamel.jbang.version=3.18.3 camel@apache/camel run MyJBangRoute.java 2022-11-23 12:19:11.516 INFO 55909 --- [ main] org.apache.camel.main.MainSupport : Apache Camel (JBang) 3.18.3 is starting 2022-11-23 12:19:11.592 INFO 55909 --- [ main] org.apache.camel.main.MainSupport : Using Java 11.0.17 with PID 55909. Started by squake in /home/squake/workspace/jbang/camel-blog 2022-11-23 12:19:14.020 INFO 55909 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.3 (CamelJBang) is starting 2022-11-23 12:19:14.220 INFO 55909 --- [ main] e.camel.impl.engine.AbstractCamelContext : Routes startup (started:1) 2022-11-23 12:19:14.220 INFO 55909 --- [ main] e.camel.impl.engine.AbstractCamelContext : Started route1 (file:///tmp/input) 2022-11-23 12:19:14.220 INFO 55909 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.3 (CamelJBang) started in 677ms (build:133ms init:344ms start:200ms JVM-uptime:3s) 2022-11-23 12:19:27.757 INFO 55909 --- [le:///tmp/input] MyJBangRoute.java:11 : Processing file file_1669202367381 with content: some entry 2022-11-23 12:19:27.758 INFO 55909 --- [le:///tmp/input] MyJBangRoute:21 : WARN not checked! 2022-11-23 12:19:32.276 INFO 55909 --- [le:///tmp/input] MyJBangRoute.java:11 : Processing file file_1669202372252 with content: some entry checked",
"camel k run MyJBangRoute.java",
"kamel run MyJBangRoute.java --volume my-pv-claim-input:/tmp/input --volume my-pv-claim-output:/tmp/output --volume my-pv-claim-discarded:/tmp/discarded --dev [1] 2022-11-23 11:39:26,281 INFO [route1] (Camel (camel-1) thread #1 - file:///tmp/input) Processing file file_1669203565971 with content: some entry [1] [1] 2022-11-23 11:39:26,303 INFO [route1] (Camel (camel-1) thread #1 - file:///tmp/input) WARN not checked! [1] 2022-11-23 11:39:32,322 INFO [route1] (Camel (camel-1) thread #1 - file:///tmp/input) Processing file file_1669203571981 with content: some entry checked",
"apiVersion: camel.apache.org/v1 kind: Kamelet metadata: name: coffee-source annotations: camel.apache.org/kamelet.support.level: \"Stable\" camel.apache.org/catalog.version: \"4.7.0-SNAPSHOT\" camel.apache.org/kamelet.icon: \"data:image/svg+xml;base64,<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://www.w3.org/2000/svg" height="92pt" width="92pt" version="1.0" xmlns:cc="http://creativecommons.org/ns#" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:dc="http://purl.org/dc/elements/1.1/">
	<defs>
		<linearGradient id="a">
			<stop stop-color="#ffffff" stop-opacity=".5" offset="0"/>
			<stop stop-color="#ffffff" stop-opacity=".1" offset="1"/>
		</linearGradient>
		<linearGradient id="d" y2="62.299" xlink:href="#a" gradientUnits="userSpaceOnUse" y1="33.61" gradientTransform="matrix(.78479 0 0 1.2742 -25.691 -8.5635)" x2="95.689" x1="59.099"/>
		<linearGradient id="c" y2="241.09" xlink:href="#a" gradientUnits="userSpaceOnUse" y1="208.04" gradientTransform="matrix(1.9777 0 0 .50563 -25.691 -8.5635)" x2="28.179" x1="17.402"/>
		<linearGradient id="b" y2="80.909" xlink:href="#a" gradientUnits="userSpaceOnUse" y1="55.988" gradientTransform="matrix(1.5469 0 0 .64647 -25.691 -8.5635)" x2="87.074" x1="70.063"/>
	</defs>
	<path stroke-linejoin="round" d="m12.463 24.886c2.352 1.226 22.368 5.488 33.972 5.226 16.527 0.262 30.313-6.049 32.927-7.055 0 1.433-2.307 10.273-2.614 15.679 0 5.448 1.83 28.415 2.091 33.711 0.868 6.178 2.704 13.861 4.443 19.077 1.829 3.553-23.563 9.856-34.757 10.456-12.602 0.78-38.937-4.375-37.369-8.366 0-3.968 3.659-13.383 3.659-19.599 0.522-6.025-0.262-23.273-0.262-30.836-0.261-6.78-1.053-12.561-2.09-18.293z" fill-rule="evenodd" stroke="#000000" stroke-width="1pt" fill="#fbd900"/>
	<path d="m10.633 94.659c-5.5851-1.331-7.8786 10.111-1.8288 12.021 6.3678 3.75 29.703 7.06 39.199 6.27 11.101-0.26 31.192-4.44 35.801-8.36 6.134-3.92 5.466-13.066 0-12.021-3.278 3.658-26.699 8.881-36.585 9.411-9.223 0.78-30.749-2.53-36.586-7.321z" fill-rule="evenodd" stroke="#000000" stroke-width="1pt" fill="#fbf3bf"/>
	<path stroke-linejoin="bevel" d="m77.382 34.046c1.245-3.212 9.639-6.972 12.364-7.516 4.686-1.05 12.384-1.388 16.764 4.28 7.94 10.323 6.76 28.626 2.86 34.638-2.78 5.104-9.371 10.282-14.635 11.878-5.151 1.533-12.707 2.661-14.333 3.711-0.35-1.296-1.327-7.388-1.38-9.071 1.95 0.128 7.489-0.893 11.695-1.868 3.902-0.899 6.45-3.274 9.333-6.222 5-4.7 4.35-21.16 0.54-25.057-2.233-2.262-6.849-3.904-9.915-3.323-4.992 1.032-13.677 7.366-13.677 6.98-0.508-2.08-0.25-6.159 0.384-8.43z" fill-rule="evenodd" stroke="#000000" stroke-width="1.25" fill="#fbf3bf"/>
	<path stroke-linejoin="round" d="m32.022 38.368c1.655 1.206-1.355 16.955-0.942 28.131 0.414 14.295 1.444 23.528-0.521 24.635-3.108 1.675-9.901-0.135-12.046-2.42-1.273-1.507 1.806-10.24 2.013-16.429-0.414-8.711-1.703-33.303-0.461-34.778 2.252-2.053 9.681-1.152 11.957 0.861z" fill-rule="evenodd" stroke="#000000" stroke-width="1.25" fill="#fbe600"/>
	<path d="m40.612 39.037c-1.478 1.424-0.063 19.625-0.063 22.559 0.305 3.808-1.101 27.452-0.178 28.954 1.848 2.122 10.216 2.442 13.001-0.356 1.505-1.875-0.478-22.544-0.478-27.68 0-5.51 1.407-22.052-0.44-23.58-2.033-2.149-8.44-3.18-11.842 0.103z" fill-rule="evenodd" stroke="#000000" stroke-width="1pt" fill="#fbe600"/>
	<path stroke-linejoin="round" d="m60.301 37.593c-1.658 1.256 1.179 15.8 1.194 26.982 0.137 14.299-1.245 24.662 0.824 25.709 3.268 1.578 10.881-1.542 13-3.891 1.253-1.545-1.411-10.179-2.082-16.358-0.984-8.164 0.148-33.128-1.189-34.564-2.402-1.984-9.482 0.04-11.747 2.122z" fill-rule="evenodd" stroke="#000000" stroke-width="1.25" fill="#fbe600"/>
	<path d="m53.582 31.12c-4.989 1.109-36.588-3.141-39.729-4.804 0.924 4.62 3.141 45.272 1.663 49.892 0.185 2.032-3.88 15.152-3.695 17.924 17.184-68.37 39.728-48.968 41.761-63.012z" fill-rule="evenodd" fill="url(#d)"/>
	<path d="m10.027 95.309c-3.0515-0.897-5.2053 6.821-2.872 9.151 5.743 2.69 13.282-2.33 38.23-1.61-12.743-0.36-31.589-2.874-35.358-7.541z" fill-rule="evenodd" fill="url(#c)"/>
	<path d="m78.59 33.567c4.487-4.488 8.794-5.564 13.999-6.462 8.791-2.333 14.901 3.769 16.871 11.846-4.49-7.179-10.23-8.256-14.178-8.436-4.128 0.718-15.795 7.898-16.872 9.154s-0.718-4.128 0.18-6.102z" fill-rule="evenodd" fill="url(#b)"/>
	<path stroke-linejoin="round" d="m11.408 77.34c2.3832 1.159 4.2811-1.5693 3.4649-3.0303 0.91503 0.08658 1.7948-0.3254 1.7948-1.7948 0.72044-0.72044-0.36461-1.8544-0.36461-2.7357-0.99354-0.99354 0.0056-2.165 0.0056-3.7257 0-1.5535 0.89742-2.5024 0.89742-4.1281 0-2.3611 2.0594-1.1807 0.89742-4.6666 1.0882-0.42455 2.2741-1.4845 0.89742-2.6923 2.1601-0.23952 3.2186-2.3542 0.53845-4.6666 4.0734 0-4.2302-8.7305 2.6923-6.9999 2.222-0.55551 1.7948-2.2151 1.7948-4.3076 2.8717 3.9487 6.8954 2.6213 7.5383 0 1.3486 4.3998 10.59 2.5869 10.59-2.8717 0.17948 6.7502 7.1177 3.4046 8.4358 3.9486-1.6154 1.8662 1.5841 9.0796 4.3076 9.1537-6.3097 4.7323-5.1729 13.001 2.5128 14.538 3.8938 0 5.3845-3.2785 5.3845-7.8973 1.2564 2.6447 6.972 4.2797 6.9999-0.17948 2.8717 5.5446 6.4959-1.4704 4.3076-2.1538 5.0256 1.9057 3.2128-6.9811 1.3785-9.056 2.8718-0.91448 1.8346-7.6184 0.0574-9.7898 2.6212 2.6652 6.7385-0.83112 6.282-5.923 1.228 3.4671 9.1475-0.36828 3.7692-8.4358 0-1.5451-4.4871-1.7488-5.564-0.53845-0.01541-5.4461-4.0997-9.6921-6.9999-8.6152 1.799-2.6932-9.048-4.8999-11.308-0.539 1.351-5.7012-13.81-9.3336-14.179-6.1029-1.748-2.5128-11.771-2.5586-14.718 6.2819 0-4.8606-16.309-6.9999-15.974 0.35897-3.4899-2.4331-9.2274 0.35897-8.7947 3.2307-5.3845-2.7034-7.842 9.5611-3.4102 10.231-2.5128 2.2624-2.6923 11.311 0.53845 11.128-1.9743 2.1297-0.89742 8.4366 1.2564 8.6152-1.6794 2.3206 0.2457 13.674 7.1794 11.846 0 2.5234 0.70877 4.6941-0.17948 7.3588 0 1.5455-0.89742 2.8528-0.89742 4.4871 0.37206 0.74412-1.2597 2.7244 0.53845 3.9486-4.2167 1.7593-3.3024 4.4642-1.6701 5.7226z" fill-rule="evenodd" stroke="#000000" stroke-width="1pt" fill="#ffffff"/>
	<path stroke-linejoin="round" d="m11.317 32.574c-1.5098-1.65 1.221-7.04 4.242-6.763 0.689-2.474 2.586-2.892 4.688-2.187-1.048-2.045 1.503-3.992 3.75-1.682 1.517-2.622 4.677-4.645 6.356-3.231-0.132-3.373 6.063-6.794 8.331-3.837 0 0.606-0.362 1.875 0 1.875" stroke="#000000" stroke-linecap="round" stroke-width="1pt" fill="none"/>
	<path stroke-linejoin="round" d="m48.372 22.374c-0.104-4.721 14.009-8.591 11.25-0.313 1.269-0.634 6.875-1.299 5.844 2.314 4.123-0.466 10.39 1.104 6.662 6.688 2.396 1.806 1.331 6.696-0.319 5.061" stroke="#000000" stroke-linecap="round" stroke-width="1pt" fill="none"/>
</svg>
\" camel.apache.org/provider: \"Apache Software Foundation\" camel.apache.org/kamelet.group: \"Coffees\" camel.apache.org/kamelet.namespace: \"Dataset\" labels: camel.apache.org/kamelet.type: \"source\" spec: definition: title: \"Coffee Source\" description: \"Produces periodic events about coffees!\" type: object properties: period: title: Period description: The time interval between two events type: integer default: 5000 types: out: mediaType: application/json dependencies: - \"camel:timer\" - \"camel:http\" - \"camel:kamelet\" template: from: uri: \"timer:coffee\" parameters: period: \"{{period}}\" steps: - to: https://random-data-api.com/api/coffee/random_coffee - removeHeaders: pattern: '*' - to: \"kamelet:sink\"",
"- from: uri: \"kamelet:coffee-source?period=5000\" steps: - log: \"USD{body}\"",
"camel run --local-kamelet-dir=</path/to/local/kamelets/dir> coffee-integration.yaml 2022-11-24 11:27:29.634 INFO 39527 --- [ main] org.apache.camel.main.MainSupport : Apache Camel (JBang) 3.18.1 is starting 2022-11-24 11:27:29.706 INFO 39527 --- [ main] org.apache.camel.main.MainSupport : Using Java 11.0.17 with PID 39527. Started by squake in /home/squake/workspace/jbang/camel-blog 2022-11-24 11:27:31.391 INFO 39527 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) is starting 2022-11-24 11:27:31.590 INFO 39527 --- [ main] org.apache.camel.main.BaseMainSupport : Property-placeholders summary 2022-11-24 11:27:31.590 INFO 39527 --- [ main] org.apache.camel.main.BaseMainSupport : [coffee-source.kamelet.yaml] period=5000 2022-11-24 11:27:31.590 INFO 39527 --- [ main] org.apache.camel.main.BaseMainSupport : [coffee-source.kamelet.yaml] templateId=coffee-source 2022-11-24 11:27:31.591 INFO 39527 --- [ main] e.camel.impl.engine.AbstractCamelContext : Routes startup (started:2) 2022-11-24 11:27:31.591 INFO 39527 --- [ main] e.camel.impl.engine.AbstractCamelContext : Started route1 (kamelet://coffee-source) 2022-11-24 11:27:31.591 INFO 39527 --- [ main] e.camel.impl.engine.AbstractCamelContext : Started coffee-source-1 (timer://coffee) 2022-11-24 11:27:31.591 INFO 39527 --- [ main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) started in 1s143ms (build:125ms init:819ms start:199ms JVM-uptime:2s) 2022-11-24 11:27:33.297 INFO 39527 --- [ - timer://coffee] coffee-integration.yaml:4 : {\"id\":3648,\"uid\":\"712d4f54-3314-4129-844e-9915002ecbb7\",\"blend_name\":\"Winter Cowboy\",\"origin\":\"Lekempti, Ethiopia\",\"variety\":\"Agaro\",\"notes\":\"delicate, juicy, sundried tomato, fresh bread, lemonade\",\"intensifier\":\"juicy\"}"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/testing_guide_camel_k/testing-camel-k-integration
|
Appendix A. Configuration reference
|
Appendix A. Configuration reference As a storage administrator, you can set various options for the Ceph Object Gateway. These options contain default values. If you do not specify each option, then the default value is set automatically. To set specific values for these options, update the configuration database by using the ceph config set client.rgw OPTION VALUE command. A.1. General settings Name Description Type Default rgw_data Sets the location of the data files for Ceph Object Gateway. String /var/lib/ceph/radosgw/USDcluster-USDid rgw_enable_apis Enables the specified APIs. String s3, s3website, swift, swift_auth, admin, sts, iam, notifications rgw_cache_enabled Whether the Ceph Object Gateway cache is enabled. Boolean true rgw_cache_lru_size The number of entries in the Ceph Object Gateway cache. Integer 10000 rgw_socket_path The socket path for the domain socket. FastCgiExternalServer uses this socket. If you do not specify a socket path, Ceph Object Gateway will not run as an external server. The path you specify here must be the same as the path specified in the rgw.conf file. String N/A rgw_host The host for the Ceph Object Gateway instance. Can be an IP address or a hostname. String 0.0.0.0 rgw_port Port the instance listens for requests. If not specified, Ceph Object Gateway runs external FastCGI. String None rgw_dns_name The DNS name of the served domain. See also the hostnames setting within zone groups. String None rgw_script_uri The alternative value for the SCRIPT_URI if not set in the request. String None rgw_request_uri The alternative value for the REQUEST_URI if not set in the request. String None rgw_print_continue Enable 100-continue if it is operational. Boolean true rgw_remote_addr_param The remote address parameter. For example, the HTTP field containing the remote address, or the X-Forwarded-For address if a reverse proxy is operational. String REMOTE_ADDR rgw_op_thread_timeout The timeout in seconds for open threads. Integer 600 rgw_op_thread_suicide_timeout The timeout in seconds before a Ceph Object Gateway process dies. Disabled if set to 0 . Integer 0 rgw_thread_pool_size The size of the thread pool. Integer 512 threads. rgw_num_control_oids The number of notification objects used for cache synchronization between different rgw instances. Integer 8 rgw_init_timeout The number of seconds before Ceph Object Gateway gives up on initialization. Integer 30 rgw_mime_types_file The path and location of the MIME types. Used for Swift auto-detection of object types. String /etc/mime.types rgw_gc_max_objs The maximum number of objects that may be handled by garbage collection in one garbage collection processing cycle. Integer 32 rgw_gc_obj_min_wait The minimum wait time before the object may be removed and handled by garbage collection processing. Integer 2 * 3600 rgw_gc_processor_max_time The maximum time between the beginning of two consecutive garbage collection processing cycles. Integer 3600 rgw_gc_processor_period The cycle time for garbage collection processing. Integer 3600 rgw_s3 success_create_obj_status The alternate success status response for create-obj . Integer 0 rgw_resolve_cname Whether rgw should use the DNS CNAME record of the request hostname field (if hostname is not equal to rgw_dns name ). Boolean false rgw_object_stripe_size The size of an object stripe for Ceph Object Gateway objects. Integer 4 << 20 rgw_extended_http_attrs Add a new set of attributes that could be set on an object. 
These extra attributes can be set through HTTP header fields when putting the objects. If set, these attributes will return as HTTP fields when doing GET/HEAD on the object. String None. For example: "content_foo, content_bar" rgw_exit_timeout_secs Number of seconds to wait for a process before exiting unconditionally. Integer 120 rgw_get_obj_window_size The window size in bytes for a single object request. Integer 16 << 20 rgw_get_obj_max_req_size The maximum request size of a single get operation sent to the Ceph Storage Cluster. Integer 4 << 20 rgw_relaxed_s3_bucket_names Enables relaxed S3 bucket names rules for zone group buckets. Boolean false rgw_list buckets_max_chunk The maximum number of buckets to retrieve in a single operation when listing user buckets. Integer 1000 rgw_override_bucket_index_max_shards The number of shards for the bucket index object. A value of 0 indicates there is no sharding. Red Hat does not recommend setting a value too large (for example, 1000 ) as it increases the cost for bucket listing. This variable should be set in the [client] or the [global] section so it is automatically applied to radosgw-admin commands. Integer 0 rgw_curl_wait_timeout_ms The timeout in milliseconds for certain curl calls. Integer 1000 rgw_copy_obj_progress Enables output of object progress during long copy operations. Boolean true rgw_copy_obj_progress_every_bytes The minimum bytes between copy progress output. Integer 1024 * 1024 rgw_admin_entry The entry point for an admin request URL. String admin rgw_content_length_compat Enable compatibility handling of FCGI requests with both CONTENT_LENGTH AND HTTP_CONTENT_LENGTH set. Boolean false rgw_bucket_default_quota_max_objects The default maximum number of objects per bucket. This value is set on new users if no other quota is specified. It has no effect on existing users. This variable should be set in the [client] or the [global] section so it is automatically applied to radosgw-admin commands. Integer -1 rgw_bucket_quota_ttl The amount of time in seconds cached quota information is trusted. After this timeout, the quota information will be re-fetched from the cluster. Integer 600 rgw_user_quota_bucket_sync_interval The amount of time in seconds bucket quota information is accumulated before syncing to the cluster. During this time, other RGW instances will not see the changes in bucket quota stats from operations on this instance. Integer 180 rgw_user_quota_sync_interval The amount of time in seconds user quota information is accumulated before syncing to the cluster. During this time, other RGW instances will not see the changes in user quota stats from operations on this instance. Integer 3600 * 24 log_meta A zone parameter to determine whether or not the gateway logs the metadata operations. Boolean false log_data A zone parameter to determine whether or not the gateway logs the data operations. Boolean false sync_from_all A radosgw-admin command to set or unset whether zone syncs from all zonegroup peers. Boolean false A.2. About pools Ceph zones map to a series of Ceph Storage Cluster pools. Manually Created Pools vs. Generated Pools If the user key for the Ceph Object Gateway contains write capabilities, the gateway has the ability to create pools automatically. This is convenient for getting started. However, the Ceph Object Storage Cluster uses the placement group default values unless they were set in the Ceph configuration file. Additionally, Ceph will use the default CRUSH hierarchy. 
These settings are NOT ideal for production systems. The default pools for the Ceph Object Gateway's default zone include: .rgw.root .default.rgw.control .default.rgw.meta .default.rgw.log .default.rgw.buckets.index .default.rgw.buckets.data .default.rgw.buckets.non-ec The Ceph Object Gateway creates pools on a per zone basis. If you create the pools manually, prepend the zone name. The system pools store objects related to, for example, system control, logging, and user information. By convention, these pool names have the zone name prepended to the pool name. .<zone-name>.rgw.control : The control pool. .<zone-name>.log : The log pool contains logs of all bucket/container and object actions, such as create, read, update, and delete. .<zone-name>.rgw.buckets.index : This pool stores the index of the buckets. .<zone-name>.rgw.buckets.data : This pool stores the data of the buckets. .<zone-name>.rgw.meta : The metadata pool stores user_keys and other critical metadata. .<zone-name>.meta:users.uid : The user ID pool contains a map of unique user IDs. .<zone-name>.meta:users.keys : The keys pool contains access keys and secret keys for each user ID. .<zone-name>.meta:users.email : The email pool contains email addresses associated with a user ID. .<zone-name>.meta:users.swift : The Swift pool contains the Swift subuser information for a user ID. Ceph Object Gateways store data for the bucket index ( index_pool ) and bucket data ( data_pool ) in placement pools. These may overlap; that is, you may use the same pool for the index and the data. The index pool for default placement is {zone-name}.rgw.buckets.index and for the data pool for default placement is {zone-name}.rgw.buckets . Name Description Type Default rgw_zonegroup_root_pool The pool for storing all zone group-specific information. String .rgw.root rgw_zone_root_pool The pool for storing zone-specific information. String .rgw.root A.3. Lifecycle settings As a storage administrator, you can set various bucket lifecycle options for a Ceph Object Gateway. These options contain default values. If you do not specify each option, then the default value is set automatically. To set specific values for these options, update the configuration database by using the ceph config set client.rgw OPTION VALUE command. Name Description Type Default rgw_lc_debug_interval For developer use only to debug lifecycle rules by scaling expiration rules from days into an interval in seconds. Red Hat recommends that this option not be used in a production cluster. Integer -1 rgw_lc_lock_max_time The timeout value used internally by the Ceph Object Gateway. Integer 90 rgw_lc_max_objs Controls the sharding of the RADOS Gateway internal lifecycle work queues, and should only be set as part of a deliberate resharding workflow. Red Hat recommends not changing this setting after the setup of your cluster, without first contacting Red Hat support. Integer 32 rgw_lc_max_rules The number of lifecycle rules to include in one, per bucket, lifecycle configuration document. The Amazon Web Service (AWS) limit is 1000 rules. Integer 1000 rgw_lc_max_worker The number of lifecycle worker threads to run in parallel, processing bucket and index shards simultaneously. Red Hat does not recommend setting a value larger than 10 without contacting Red Hat support. Integer 3 rgw_lc_max_wp_worker The number of buckets that each lifecycle worker thread can process in parallel. Red Hat does not recommend setting a value larger than 10 without contacting Red Hat Support. 
Integer 3 rgw_lc_thread_delay A delay, in milliseconds, that can be injected into shard processing at several points. The default value is 0. Setting a value from 10 to 100 ms would reduce CPU utilization on RADOS Gateway instances and reduce the proportion of workload capacity of lifecycle threads relative to ingest if saturation is being observed. Integer 0 A.4. Swift settings Name Description Type Default rgw_enforce_swift_acls Enforces the Swift Access Control List (ACL) settings. Boolean true rgw_swift_token_expiration The time in seconds for expiring a Swift token. Integer 24 * 3600 rgw_swift_url The URL for the Ceph Object Gateway Swift API. String None rgw_swift_url_prefix The URL prefix for the Swift API, for example, http://fqdn.com/swift . swift N/A rgw_swift_auth_url Default URL for verifying v1 auth tokens (if not using internal Swift auth). String None rgw_swift_auth_entry The entry point for a Swift auth URL. String auth A.5. Logging settings Name Description Type Default debug_rgw_datacache Low level D3N logs can be enabled by the debug_rgw_datacache subsystem (up to debug_rgw_datacache = 30 ) Integer 1/5 rgw_log_nonexistent_bucket Enables Ceph Object Gateway to log a request for a non-existent bucket. Boolean false rgw_log_object_name The logging format for an object name. See manpage date for details about format specifiers. Date %Y-%m-%d-%H-%i-%n rgw_log_object_name_utc Whether a logged object name includes a UTC time. If false , it uses the local time. Boolean false rgw_usage_max_shards The maximum number of shards for usage logging. Integer 32 rgw_usage_max_user_shards The maximum number of shards used for a single user's usage logging. Integer 1 rgw_enable_ops_log Enable logging for each successful Ceph Object Gateway operation. Boolean false rgw_enable_usage_log Enable the usage log. Boolean false rgw_ops_log_rados Whether the operations log should be written to the Ceph Storage Cluster backend. Boolean true rgw_ops_log_socket_path The Unix domain socket for writing operations logs. String None rgw_ops_log_data-backlog The maximum data backlog data size for operations logs written to a Unix domain socket. Integer 5 << 20 rgw_usage_log_flush_threshold The number of dirty merged entries in the usage log before flushing synchronously. Integer 1024 rgw_usage_log_tick_interval Flush pending usage log data every n seconds. Integer 30 rgw_intent_log_object_name The logging format for the intent log object name. See manpage date for details about format specifiers. Date %Y-%m-%d-%i-%n rgw_intent_log_object_name_utc Whether the intent log object name includes a UTC time. If false , it uses the local time. Boolean false rgw_data_log_window The data log entries window in seconds. Integer 30 rgw_data_log_changes_size The number of in-memory entries to hold for the data changes log. Integer 1000 rgw_data_log_num_shards The number of shards (objects) on which to keep the data changes log. Integer 128 rgw_data_log_obj_prefix The object name prefix for the data log. String data_log rgw_replica_log_obj_prefix The object name prefix for the replica log. String replica log rgw_md_log_max_shards The maximum number of shards for the metadata log. Integer 64 rgw_log_http_headers Comma-delimited list of HTTP headers to include with ops log entries. Header names are case insensitive, and use the full header name with words separated by underscores. String None Note Changing the rgw_data_log_num_shards value is not supported. A.6. 
Keystone settings Name Description Type Default rgw_keystone_url The URL for the Keystone server. String None rgw_keystone_admin_token The Keystone admin token (shared secret). String None rgw_keystone_accepted_roles The roles required to serve requests. String Member, admin rgw_keystone_token_cache_size The maximum number of entries in each Keystone token cache. Integer 10000 A.7. Keystone integration configuration options You can integrate your configuration options into Keystone. See below for a detailed description of the available Keystone integration configuration options: Important After updating the Ceph configuration file, you must copy the new Ceph configuration file to all Ceph nodes in the storage cluster. rgw_s3_auth_use_keystone Description If set to true , the Ceph Object Gateway will authenticate users using Keystone. Type Boolean Default false nss_db_path Description The path to the NSS database. Type String Default "" rgw_keystone_url Description The URL for the administrative RESTful API on the Keystone server. Type String Default "" rgw_keystone_admin_token Description The token or shared secret that is configured internally in Keystone for administrative requests. Type String Default "" rgw_keystone_admin_user Description The keystone admin user name. Type String Default "" rgw_keystone_admin_password Description The keystone admin user password. Type String Default "" rgw_keystone_admin_tenant Description The Keystone admin user tenant for keystone v2.0. Type String Default "" rgw_keystone_admin_project Description the keystone admin user project for keystone v3. Type String Default "" rgw_trust_forwarded_https Description When a proxy in front of the Ceph Object Gateway is used for SSL termination, it does not whether incoming http connections are secure. Enable this option to trust the forwarded and X-forwarded headers sent by the proxy when determining when the connection is secure. This is mainly required for server-side encryption. Type Boolean Default false rgw_swift_account_in_url Description Whether the Swift account is encoded in the URL path. You must set this option to true and update the Keystone service catalog if you want the Ceph Object Gateway to support publicly-readable containers and temporary URLs. Type Boolean Default false rgw_keystone_admin_domain Description The Keystone admin user domain. Type String Default "" rgw_keystone_api_version Description The version of the Keystone API to use. Valid options are 2 or 3 . Type Integer Default 2 rgw_keystone_accepted_roles Description The roles required to serve requests. Type String Default member, Member, admin , rgw_keystone_accepted_admin_roles Description The list of roles allowing a user to gain administrative privileges. Type String Default ResellerAdmin, swiftoperator rgw_keystone_token_cache_size Description The maximum number of entries in the Keystone token cache. Type Integer Default 10000 rgw_keystone_verify_ssl Description If true Ceph will try to verify Keystone's SSL certificate. Type Boolean Default true rgw_keystone_implicit_tenants Description Create new users in their own tenants of the same name. Set this to true or false under most circumstances. For compatibility with versions of Red Hat Ceph Storage, it is also possible to set this to s3 or swift . This has the effect of splitting the identity space such that only the indicated protocol will use implicit tenants. Some older versions of Red Hat Ceph Storage only supported implicit tenants with Swift. 
Type String Default false rgw_max_attr_name_len Description The maximum length of a metadata name. 0 skips the check. Type Size Default 0 rgw_max_attrs_num_in_req Description The maximum number of metadata items that can be put with a single request. Type uint Default 0 rgw_max_attr_size Description The maximum length of a metadata value. 0 skips the check. Type Size Default 0 rgw_swift_versioning_enabled Description Enable Swift versioning. Type Boolean Default false rgw_keystone_accepted_reader_roles Description List of roles that can only be used for reads. Type String Default "" rgw_swift_enforce_content_length Description Send the content length when listing containers. Type String Default false A.8. LDAP settings Name Description Type Example rgw_ldap_uri A space-separated list of LDAP servers in URI format. String ldaps://<ldap.your.domain> rgw_ldap_searchdn The LDAP search domain name, also known as the base domain. String cn=users,cn=accounts,dc=example,dc=com rgw_ldap_binddn The gateway binds with this LDAP entry (user match). String uid=admin,cn=users,dc=example,dc=com rgw_ldap_secret A file containing credentials for rgw_ldap_binddn . String /etc/openldap/secret rgw_ldap_dnattr The LDAP attribute containing Ceph Object Gateway user names (used to form binddns). String uid
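To illustrate the configuration pattern described at the start of this appendix, the following is a minimal sketch of setting and verifying a few of the options listed above with the ceph config set client.rgw OPTION VALUE command. The option names are taken from the preceding tables; the values shown are illustrative only, not tuning recommendations.
# Set a general, a lifecycle, and a logging option for all Ceph Object Gateway instances
ceph config set client.rgw rgw_thread_pool_size 512
ceph config set client.rgw rgw_lc_max_worker 3
ceph config set client.rgw rgw_enable_usage_log true
# Confirm the values now stored in the configuration database
ceph config get client.rgw rgw_thread_pool_size
ceph config get client.rgw rgw_enable_usage_log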
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/object_gateway_guide/configuration-reference
|
3.19. Searching for Events
|
3.19. Searching for Events The following table describes all search options you can use to search for events. Auto-completion is offered for many options as appropriate. Table 3.15. Searching for Events Property (of resource or resource-type) Type Description (Reference) Vms. Vms-prop Depends on property type The property of the virtual machines associated with the event. Hosts. hosts-prop Depends on property type The property of the hosts associated with the event. Templates. templates-prop Depends on property type The property of the templates associated with the event. Users. users-prop Depends on property type The property of the users associated with the event. Clusters. clusters-prop Depends on property type The property of the clusters associated with the event. Volumes. Volumes-prop Depends on property type The property of the volumes associated with the event. type List Type of the event. severity List The severity of the event: Warning/Error/Normal. message String Description of the event type. time List Day the event occurred. usrname String The user name associated with the event. event_host String The host associated with the event. event_vm String The virtual machine associated with the event. event_template String The template associated with the event. event_storage String The storage associated with the event. event_datacenter String The data center associated with the event. event_volume String The volume associated with the event. correlation_id Integer The identification number of the event. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Events: Vms.name = testdesktop and Hosts.name = gonzo.example.com This example returns a list of events, where the event occurred on the virtual machine named testdesktop while it was running on the host gonzo.example.com .
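As a further illustration of combining these properties, the following example filters by severity and host, then sorts and pages the results; the host name kvm01.example.com is a placeholder. Example Events: severity = error and event_host = kvm01.example.com sortby time desc page 2 This example returns the second page of error events that occurred on the host kvm01.example.com, sorted by event time in descending order.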
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/searching_for_events
|
Chapter 1. Introduction to Red Hat OpenShift certification policies
|
Chapter 1. Introduction to Red Hat OpenShift certification policies The Red Hat Openshift certification policy guide covers the technical and operational certification requirements to obtain and maintain Red Hat certification for a software product on Red Hat OpenShift. To know the test requirements and procedure for achieving this certification, see the Red Hat Software certification workflow guide . 1.1. Audience Red Hat OpenShift certification is offered to commercial software vendors that deliver cloud-native software products targeting Red Hat OpenShift as the deployment platform. 1.2. Create value for customers The certification process allows partners to continuously verify if their product meets Red Hat standards of interoperability, security, and life cycle management when deployed on Red Hat OpenShift. Our customers benefit from a trusted application and infrastructure stack, tested and jointly supported by Red Hat and the Partner. 1.3. Certification and Partner validation Red Hat offers you the ability to certify or validate your products. Red Hat-certified products undergo thorough testing and are collaboratively supported with you. These products meet your standards and Red Hat's criteria, including functionality, interoperability, lifecycle management, security, and support requirements. Partner-validated products are tested and supported by you. Validation allows you to enable and publish your software offerings more quickly. However, by definition, validated workloads do not include the full thoroughness of Red Hat certification. We encourage you to continue efforts toward stabilization, upstream acceptance, Red Hat enablement, and Red Hat certification. Note The validation option is not available for all infrastructure software. Understanding the differences between certification and validation, along with the capabilities, limitations, and achievements of your products, is essential for you and your customers. 1.4. Support responsibilities Red Hat customers receive the best support experience when using components from our robust ecosystem of certified enterprise hardware, software, and cloud partners. Red Hat provides support for Red Hat-certified products and Red Hat software according to the Red Hat Service Level Agreement (SLA). If a certified or validated third-party component is involved in a customer issue, Red Hat collaborates with you to resolve it according to the Third party support policy . Red Hat does not stipulate customer support policies. However, we require your support in assisting customers with diagnosing and resolving issues related to the functionality, interoperability, lifecycle management, and security of your software in conjunction with ours. Being listed as certified or validated in the Red Hat Ecosystem Catalog indicates your commitment to supporting your products and providing reliable solutions for our joint customers, adhering to your policies with Red Hat products. 1.5. Targeted products for certification and validation Certification and validation is available for workload products that target Red Hat OpenShift as their deployment platform. Red Hat recommends that you manage the product's life cycle by using technology native to Kubernetes, such as Operators or Helm charts, because they deliver a user experience that is closely integrated with Red Hat OpenShift. For these two options, certification covers the packaging format and compatibility with the Red Hat OpenShift tools. 
If your product uses a different technology for installation and upgrades, certification will be limited to the container images. Products that deliver infrastructure services for Red Hat OpenShift, storage services provided through a CSI driver or networking services integrated via a CNI plugin, require tight integration with the platform's life cycle management. Therefore, they do not qualify for validation and must be managed by an Operator and demonstrate compliance with the corresponding Kubernetes APIs. Specialized certification and validation is available for cloud-native network functions for the Telecommunications market. Additional resources For more information about building Operators that meet the certification criteria, see Certified Operator Guide . 1.6. Prerequisites and process overview 1.6.1. Prerequisites Join the Red Hat Partner Connect program. Accept the standard Partner Agreements along with the terms and conditions specific to containerized software. Enter basic information about your company and the products you wish to certify through the Red Hat Partner Connect portal. Test your product to verify that it behaves as intended on OpenShift. Support OpenShift as a platform for the product being certified or validated, and establish a support relationship with Red Hat. You can do this through the multi-vendor support network of TSANet , or through a custom support agreement. 1.6.2. Process overview The Red Hat certification and partner validation procedures are outlined below. See the Red Hat Software Certification Workflow Guide for details on how to complete each step listed below. 1.6.2.1. Certification procedure Complete the prerequisites On Red Hat Connect Create your product Create and associate components for each product component Complete the product listing checklist Complete the certification requirements for each component as appropriate Container Images Helm charts Operators Conduct functional certification (if appropriate) OpenShift badges On Red Hat Connect Complete the component certification checklist for each component Publish your components Publish your product 1.6.2.2. Validation procedure Complete the prerequisites On Red Hat Connect Create your product Complete your Product List details Create a Validation request Complete the product listing checklist Complete the validation checklist Fill in the questionnaire Wait for Red Hat to review and approve the questionnaire On Red Hat Connect Publish your product Additional resources For more information about onboarding and managing your account, see General Program Guide for Partners . 1.7. Supported Red Hat OpenShift versions Red Hat OpenShift software certification and validation is available for releases of Red Hat OpenShift v4.x which are in the Full, Maintenance or Extended Update Support (EUS) life cycle phases. Additional resources For more information, see Red Hat OpenShift Container Platform Lifecycle Policy . 1.8. Supported architectures Certification is available for all supported architectures for Red Hat OpenShift Container Platform v4.x releases. At present this includes x86_64, s390x, ppc64, and aarch64. Certifications are awarded to a single architecture. Apply for multiple certifications if your product supports more than one architecture. 1.9. 
Lifecycle Red Hat certifications and validations remain valid for 12 months or until the corresponding Red Hat OpenShift Container Platform (RHOCP) v.4.x release exits the Extended Update Support Term 2 of the RHOCP lifecycle, whichever time period is shorter. To maintain the certification or partner validation status, you must recertify or revalidate on newer versions of your software or Red Hat OpenStack Platform (RHOSP). Certifications, validations and associated products remain published until they are no longer valid or the Red Hat product version is retired from the catalog. OpenShift now includes Extended Update Support (EUS) and Extended Update Support Term 2 (EUS-T2) options, which require changes to product build and release practices, as well as ISV certification. The window for certifications or validations opens with the GA release of the minor version through the OpenShift Extended Update Support based on the even and odd schedules. Even-numbered minor releases : The window will close at the end of either Extended Update Support or Extended Update Support Term 2, whichever is later. Odd-numbered minor releases : The window will close simultaneously with the preceding even-numbered release (e.g., 4.15 will close with 4.14 at the end of Extended Update Support Term 2). This is because even-numbered releases have a longer support lifecycle. Although an odd-numbered release reaches its end-of-life sooner, it becomes relevant during updates between even-numbered releases in the extended update support phase, serving as intermediate steps. Certifying software for these end-of-life releases ensures that critical bug fixes and security updates are available, preventing regressions during customer updates. Note ISV certification tools and product build or release engineering will support odd-numbered minor releases for longer than indicated on the lifecycle page. Supporting EUS-to-EUS updates is crucial for a seamless customer experience. For example, if you certify an ISV software version 1 on RHOCP 4.14 and version 3 is on RHOCP 4.16, dual certification on 4.14 can be beneficial. This is particularly relevant if the software supports direct upgrades from version 1 to version 3 and version 3 is compatible with RHOCP 4.14. In such cases, certifying version 3 on RHOCP 4.14 allows customers to upgrade their software while remaining on RHOCP 4.14 before transitioning to 4.16, ensuring a smoother process and minimizing disruptions. Refer to the Red Hat OpenShift Container Platform Life Cycle Policy for more details. Red Hat encourages you to plan even-to-even updates for OpenShift releases reaching "Maintenance Support Ends". However, this extended product support offers flexibility for any necessary update paths, such as progressing from OpenShift 4.14 through 4.15 to 4.16, ensuring uninterrupted support for our joint customers. 1.9.1. Recertification Red Hat OpenShift Container Platform innovates at a rapid pace, as is reflected in the Red Hat OpenShift Container Platform Lifecycle Policy. It is important to approach OpenShift and certification testing as a continuous process to ensure ongoing interoperability and support for customers. You must recertify your products in the following scenarios: Certifying another version of your product Making another version of your product available through a Red Hat in-product software catalog (index/registry/repo/etc.) 
Supporting another version or architecture of Red Hat OpenShift Container Platform (RHOCP) Making a material change to your product's build, installation, upgrade process, or adding new functionality Your product contains a critical Common Vulnerability and Exposure (CVE) that is older than 3 months Your product contains an important CVE that is older than 12 months Your product was certified more than 12 months ago A material change is any change that alters the outcome of certification testing, impacts a customer's experience of your product on OpenShift, impacts a customer's experience of OpenShift, or impacts a customer's ability to utilize any part of their Red Hat subscription(s). Red Hat provides multiple mechanisms to monitor certified containers for critical vulnerabilities (CVEs). This allows you to continuously monitor and identify for critical vulnerabilities. These mechanisms will help you determine when to rebuild and recertify. Additional resources For more information about Container scanning and keeping your images up to date, see Container Health Index . For more information about implementing a CI/CD process for container builds certification, see Using OpenShift Pipelines CI/CD and Quay for Container Certification . 1.9.2. Additional validations Red Hat OpenShift Container Platform innovates at a rapid pace, as is reflected in the Red Hat OpenShift Container Platform Lifecycle Policy. It is important to approach OpenShift and certification testing as a continuous process to ensure ongoing interoperability and support for customers. You require additional validations for your products in the following scenarios: Validating another version of your product Supporting another version or architecture of Red Hat OpenShift Container Platform Making a material change to your product Your product was validated more than 12 months ago A material change is any change that alters the outcome of certification testing, impacts a customer's experience of your product on OpenShift, impacts a customer's experience of OpenShift, or impacts a customer's ability to utilize any part of their Red Hat subscription(s). 1.10. Product naming and branding Select unique product names and branding that comply with Red Hat trademark guidelines . This helps our joint customers clearly identify products that use Red Hat Marks and their source. This policy covers all catalog listings and individual product components. 1.11. Software dependencies A key benefit of Red Hat Certification is support. Ensure to check if you in coordination with Red Hat, support all the software necessary for customers to deploy and utilize your software on RHOCP. 1.12. Functional verification You must ensure that your product, with the same packages and components that you submitted for certification, works with the configurations supported by RHOCP . Ensure your product does not make any modifications to the RHOCP stack, including the host operating system, other than configuration changes that are covered in the product documentation. Unauthorized changes can impact the support from Red Hat. Red Hat encourages you to check that your product is capable of running on any node in a OpenShift cluster, regardless of the type of Red Hat OpenShift deployment (bare metal, virtual environment, or cloud service), installation process (IPI or UPI), or cluster size. 
If there are any limitations due to dependencies on hardware components, public cloud services, or any other cluster configuration requirements, these should be mentioned in the product's documentation which should be linked to your product catalog listing . Additional resources To learn more about creating product listings, see Creating a Product Listing . 1.13. Security contexts To reduce security risks, ensure that your products run in the most restrictive Security Context Constraint (SCC). For example, restricted-v2 for Red Hat OpenShift 4.12. If the product requires additional privileges, Red Hat recommends using the most restrictive SCC that provides the right capabilities. This configuration information should be included as part of the product documentation, and the certification tests must be conducted using the same security settings that are recommended for end users. Additional resources For more information, see Security context constraints in Red Hat OpenShift . 1.14. Publishing 1.14.1. Red Hat Ecosystem Catalog When you complete the Red Hat Enterprise Linux (RHEL) Certification or the partner validation workflow, Red Hat publishes an entry in the Red Hat Ecosystem Catalog . This includes a product entry and relevant information collected during the process. Products with certifications include the associated component data for containers, Helm charts and operators. Products without any certifications do not include component information. 1.14.2. Red Hat in-product catalogs Red Hat products include in-product catalogs for direct use by customers. The in-product catalogs allow customers to install, run, and manage Red Hat certified software from the appropriate interface within the Red Hat product. For example, the ISV container registry, the chart repository and operator index. Additionally, products managed by Operators or Helm charts are also included in the corresponding Red Hat Certified Operator Index or the OpenShift Helm Charts Repository , to facilitate installation and upgrades by default. Both are presented to Red Hat OpenShift users through the OpenShift console. You may opt out of being published in the Red Hat Certified Operator Index or Helm Charts Repository if it is not compatible with your software distribution model. You are responsible for testing the alternate distribution and update processes, which must be included in your product documentation. Similarly, you may opt out of being published in the Red Hat in-product catalogs if it is not compatible with your software distribution model. You are responsible for testing the alternate distribution and update processes, which must be included in your product documentation.
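As an illustration of the guidance in the Security contexts section above, the following minimal sketch shows pod and container security context settings that are generally accepted by the restricted-v2 SCC. The pod name, container name, and image are placeholders, and your product may require different settings, which you should document for your customers.
apiVersion: v1
kind: Pod
metadata:
  name: example-workload        # placeholder name
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app                   # placeholder container
    image: registry.example.com/partner/app:1.0   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL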
| null |
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openshift_software_certification_policy_guide/assembly-introduction-to-openshift-certification_openshift-sw-cert-policy
|
Chapter 6. Configuring the database
|
Chapter 6. Configuring the database 6.1. Using an existing PostgreSQL database If you are using an externally managed PostgreSQL database, you must manually enable the pg_trgm extension for a successful deployment. Important You must not use the same externally managed PostgreSQL database for both Red Hat Quay and Clair deployments. Your PostgreSQL database must also not be shared with other workloads, as it might exhaust the natural connection limit on the PostgreSQL side when connection-intensive workloads, like Red Hat Quay or Clair, contend for resources. Additionally, pgBouncer is not supported with Red Hat Quay or Clair, so it is not an option to resolve this issue. Use the following procedure to deploy an existing PostgreSQL database. Procedure Create a config.yaml file with the necessary database fields. For example: Example config.yaml file: DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database Create a Secret using the configuration file: Create a QuayRegistry.yaml file which marks the postgres component as unmanaged and references the created Secret . For example: Example quayregistry.yaml file apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: postgres managed: false steps Continue to the following sections to deploy the registry. 6.1.1. Database configuration This section describes the database configuration fields available for Red Hat Quay deployments. 6.1.1.1. Database URI With Red Hat Quay, connection to the database is configured by using the required DB_URI field. The following table describes the DB_URI configuration field: Table 6.1. Database URI Field Type Description DB_URI (Required) String The URI for accessing the database, including any credentials. Example DB_URI field: postgresql://quayuser:[email protected]:5432/quay 6.1.1.2. Database connection arguments Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific. The following table describes database connection arguments: Table 6.2. Database connection arguments Field Type Description DB_CONNECTION_ARGS Object Optional connection arguments for the database, such as timeouts and SSL/TLS. .autorollback Boolean Whether to use thread-local connections. Should always be true .threadlocals Boolean Whether to use auto-rollback connections. Should always be true 6.1.1.2.1. PostgreSQL SSL/TLS connection arguments With SSL/TLS, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL/TLS configuration: DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert The sslmode option determines whether, or with, what priority a secure SSL/TLS TCP/IP connection will be negotiated with the server. There are six modes: Table 6.3. SSL/TLS options Mode Description disable Your configuration only tries non-SSL/TLS connections. allow Your configuration first tries a non-SSL/TLS connection. Upon failure, tries an SSL/TLS connection. prefer (Default) Your configuration first tries an SSL/TLS connection. Upon failure, tries a non-SSL/TLS connection. require Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified. 
verify-ca Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted certificate authority (CA). verify-full Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches the name in the certificate. For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions . 6.1.1.2.2. MySQL SSL/TLS connection arguments The following example shows a sample MySQL SSL/TLS configuration: DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs . 6.1.2. Using the managed PostgreSQL database With Red Hat Quay 3.9, if your database is managed by the Red Hat Quay Operator, updating from Red Hat Quay 3.8 to 3.9 automatically handles upgrading PostgreSQL 10 to PostgreSQL 13. Important Users with a managed database are required to upgrade their PostgreSQL database from 10 to 13. If your Red Hat Quay and Clair databases are managed by the Operator, the database upgrades for each component must succeed for the 3.9.0 upgrade to be successful. If either of the database upgrades fails, the entire Red Hat Quay version upgrade fails. This behavior is expected. If you do not want the Red Hat Quay Operator to upgrade your PostgreSQL deployment from PostgreSQL 10 to 13, you must set the PostgreSQL parameter to managed: false in your quayregistry.yaml file. For more information about setting your database to unmanaged, see Using an existing Postgres database . Important It is highly recommended that you upgrade to PostgreSQL 13. PostgreSQL 10 had its final release on November 10, 2022, and is no longer supported. For more information, see the PostgreSQL Versioning Policy . If you want your PostgreSQL database to match the same version as your Red Hat Enterprise Linux (RHEL) system, see Migrating to a RHEL 8 version of PostgreSQL for RHEL 8 or Migrating to a RHEL 9 version of PostgreSQL for RHEL 9. For more information about the Red Hat Quay 3.8 to 3.9 upgrade procedure, see Upgrading the Red Hat Quay Operator overview . 6.1.2.1. PostgreSQL database recommendations The Red Hat Quay team recommends the following for managing your PostgreSQL database. Perform database backups regularly using either the supplied tools on the PostgreSQL image or your own backup infrastructure. The Red Hat Quay Operator does not currently ensure that the PostgreSQL database is backed up. Restore the PostgreSQL database from a backup using PostgreSQL tools and procedures. Be aware that your Quay pods should not be running while the database restore is in progress. Database disk space is allocated automatically by the Red Hat Quay Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Red Hat Quay installations but might not be sufficient for your use cases. Resizing the database volume is currently not handled by the Red Hat Quay Operator. 6.2. Configuring external Redis Use the content in this section to set up an external Redis deployment. 6.2.1. Using an unmanaged Redis database Use the following procedure to set up an external Redis database. Procedure Create a config.yaml file using the following Redis fields: # ... BUILDLOGS_REDIS: host: <quay-server.example.com> port: 6379 ssl: false # ... USER_EVENTS_REDIS: host: <quay-server.example.com> port: 6379 ssl: false # ...
Enter the following command to create a secret using the configuration file: USD oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret Create a quayregistry.yaml file that sets the Redis component to unmanaged and references the created secret: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: redis managed: false # ... Deploy the Red Hat Quay registry. Additional resources Redis configuration fields 6.2.2. Using unmanaged Horizontal Pod Autoscalers Horizontal Pod Autoscalers (HPAs) are now included with the Clair , Quay , and Mirror pods, so that they now automatically scale during load spikes. As HPA is configured by default to be managed, the number of Clair , Quay , and Mirror pods is set to two. This facilitates the avoidance of downtime when updating or reconfiguring Red Hat Quay through the Operator or during rescheduling events. Note There is a known issue when disabling the HorizontalPodAutoscaler component and attempting to edit the HPA resource itself and increase the value of the minReplicas field. When attempting this setup, Quay application pods are scaled out by the unmanaged HPA and, after 60 seconds, the replica count is reconciled by the Red Hat Quay Operator. As a result, HPA pods are continuously created and then removed by the Operator. To resolve this issue, you should upgrade your Red Hat Quay deployment to at least version 3.12.5 or 3.13.1 and then use the following example to avoid the issue. This issue will be fixed in a future version of Red Hat Quay. For more information, see PROJQUAY-6474 . 6.2.2.1. Disabling the Horizontal Pod Autoscaler To disable autoscaling or create your own HorizontalPodAutoscaler component, specify the component as unmanaged in the QuayRegistry custom resource definition. To avoid the known issue noted above, you must modify the QuayRegistry CRD object and set the replicas equal to null for the quay , clair , and mirror components. Procedure Edit the QuayRegistry CRD to include the following replicas: null for the quay component: USD oc edit quayregistry <quay_registry_name> -n <quay_namespace> apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay-registry namespace: quay-enterprise spec: components: - kind: horizontalpodautoscaler managed: false - kind: quay managed: true overrides: replicas: null 1 - kind: clair managed: true overrides: replicas: null - kind: mirror managed: true overrides: replicas: null # ... 1 After setting replicas: null in your QuayRegistry CRD, a new replica set might be generated because the deployment manifest of the Quay app is changed with replicas: 1 . 
Verification Create a customized HorizontalPodAutoscaler CRD and increase the minReplicas amount to a higher value, for example, 3 : kind: HorizontalPodAutoscaler apiVersion: autoscaling/v2 metadata: name: quay-registry-quay-app namespace: quay-enterprise spec: scaleTargetRef: kind: Deployment name: quay-registry-quay-app apiVersion: apps/v1 minReplicas: 3 maxReplicas: 20 metrics: - type: Resource resource: name: memory target: type: Utilization averageUtilization: 90 - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 90 Ensure that your QuayRegistry application successfully starts by entering the following command: USD oc get pod | grep quay-app Example output quay-registry-quay-app-5b8fd49d6b-7wvbk 1/1 Running 0 34m quay-registry-quay-app-5b8fd49d6b-jslq9 1/1 Running 0 3m42s quay-registry-quay-app-5b8fd49d6b-pskpz 1/1 Running 0 43m quay-registry-quay-app-upgrade-llctl 0/1 Completed 0 51m Ensure that your HorizontalPodAutoscaler successfully starts by entering the following command: USD oc get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE quay-registry-quay-app Deployment/quay-registry-quay-app 67%/90%, 54%/90% 3 20 3 51m 6.2.3. Disabling the Route component Use the following procedure to prevent the Red Hat Quay Operator from creating a route. Procedure Set the component as managed: false in the quayregistry.yaml file: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: route managed: false Edit the config.yaml file to specify that Red Hat Quay handles SSL/TLS. For example: # ... EXTERNAL_TLS_TERMINATION: false # ... SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com # ... PREFERRED_URL_SCHEME: https # ... If you do not configure the unmanaged route correctly, the following error is returned: { { "kind":"QuayRegistry", "namespace":"quay-enterprise", "name":"example-registry", "uid":"d5879ba5-cc92-406c-ba62-8b19cf56d4aa", "apiVersion":"quay.redhat.com/v1", "resourceVersion":"2418527" }, "reason":"ConfigInvalid", "message":"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields" } Note Disabling the default route means you are now responsible for creating a Route , Service , or Ingress to access the Red Hat Quay instance. Additionally, whatever DNS you use must match the SERVER_HOSTNAME in the Red Hat Quay config. 6.2.4. Disabling the monitoring component If you install the Red Hat Quay Operator in a single namespace, the monitoring component is automatically set to managed: false . Use the following reference to explicitly disable monitoring. Unmanaged monitoring apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: monitoring managed: false Note Monitoring cannot be enabled when the Red Hat Quay Operator is installed in a single namespace. 6.2.5. Disabling the mirroring component To disable mirroring, use the following YAML configuration: Unmanaged mirroring example YAML configuration apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: mirroring managed: false
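The component toggles shown in this chapter can be combined in a single QuayRegistry resource. The following sketch is illustrative only and simply merges the unmanaged components discussed above (an existing PostgreSQL database, external Redis, and disabled route, monitoring, and mirroring); include only the components you actually manage externally, and make sure the referenced config-bundle-secret contains the matching config.yaml fields (DB_URI, BUILDLOGS_REDIS, SERVER_HOSTNAME, and so on) described in the preceding sections.
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
  - kind: postgres
    managed: false
  - kind: redis
    managed: false
  - kind: route
    managed: false
  - kind: monitoring
    managed: false
  - kind: mirroring
    managed: false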
|
[
"DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database",
"kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: postgres managed: false",
"DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert",
"DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert",
"BUILDLOGS_REDIS: host: <quay-server.example.com> port: 6379 ssl: false USER_EVENTS_REDIS: host: <quay-server.example.com> port: 6379 ssl: false",
"oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: redis managed: false",
"oc edit quayregistry <quay_registry_name> -n <quay_namespace>",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay-registry namespace: quay-enterprise spec: components: - kind: horizontalpodautoscaler managed: false - kind: quay managed: true overrides: replicas: null 1 - kind: clair managed: true overrides: replicas: null - kind: mirror managed: true overrides: replicas: null",
"kind: HorizontalPodAutoscaler apiVersion: autoscaling/v2 metadata: name: quay-registry-quay-app namespace: quay-enterprise spec: scaleTargetRef: kind: Deployment name: quay-registry-quay-app apiVersion: apps/v1 minReplicas: 3 maxReplicas: 20 metrics: - type: Resource resource: name: memory target: type: Utilization averageUtilization: 90 - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 90",
"oc get pod | grep quay-app",
"quay-registry-quay-app-5b8fd49d6b-7wvbk 1/1 Running 0 34m quay-registry-quay-app-5b8fd49d6b-jslq9 1/1 Running 0 3m42s quay-registry-quay-app-5b8fd49d6b-pskpz 1/1 Running 0 43m quay-registry-quay-app-upgrade-llctl 0/1 Completed 0 51m",
"oc get hpa",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE quay-registry-quay-app Deployment/quay-registry-quay-app 67%/90%, 54%/90% 3 20 3 51m",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: route managed: false",
"EXTERNAL_TLS_TERMINATION: false SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com PREFERRED_URL_SCHEME: https",
"{ { \"kind\":\"QuayRegistry\", \"namespace\":\"quay-enterprise\", \"name\":\"example-registry\", \"uid\":\"d5879ba5-cc92-406c-ba62-8b19cf56d4aa\", \"apiVersion\":\"quay.redhat.com/v1\", \"resourceVersion\":\"2418527\" }, \"reason\":\"ConfigInvalid\", \"message\":\"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields\" }",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: monitoring managed: false",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: mirroring managed: false"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/configuring-the-database-poc
|
Chapter 1. Overview
|
Chapter 1. Overview Troubleshooting OpenShift Data Foundation is written to help administrators understand how to troubleshoot and fix their Red Hat OpenShift Data Foundation cluster. Most troubleshooting tasks focus on either a fix or a workaround. This document is divided into chapters based on the errors that an administrator may encounter: Chapter 2, Downloading log files and diagnostic information using must-gather shows you how to use the must-gather utility in OpenShift Data Foundation. Chapter 4, Commonly required logs for troubleshooting shows you how to obtain commonly required log files for OpenShift Data Foundation. Chapter 7, Troubleshooting alerts and errors in OpenShift Data Foundation shows you how to identify the encountered error and perform required actions. Warning Red Hat does not support running Ceph commands in OpenShift Data Foundation clusters (unless indicated by Red Hat support or Red Hat documentation) as it can cause data loss if you run the wrong commands. In that case, the Red Hat support team is only able to provide commercially reasonable effort and may not be able to restore all the data in case of any data loss.
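As a starting point for the chapters listed above, diagnostic collection typically begins with the must-gather utility. The following is a minimal sketch only; the image and destination directory are placeholders for the OpenShift Data Foundation must-gather image and local path described in Chapter 2.
oc adm must-gather --image=<odf-must-gather-image> --dest-dir=<local-directory>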
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/troubleshooting_openshift_data_foundation/overview
|
Extension APIs
|
Extension APIs OpenShift Container Platform 4.16 Reference guide for extension APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/extension_apis/index
|
Chapter 4. Serving and chatting with the models
|
Chapter 4. Serving and chatting with the models To interact with various models on Red Hat Enterprise Linux AI you must serve the model, which hosts it on a server, then you can chat with the models. 4.1. Serving the model To interact with the models, you must first activate the model in a machine through serving. The ilab model serve commands starts a vLLM server that allows you to chat with the model. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You installed your preferred Granite LLMs. You have root user access on your machine. Procedure If you do not specify a model, you can serve the default model, granite-7b-redhat-lab , by running the following command: USD ilab model serve To serve a specific model, run the following command USD ilab model serve --model-path <model-path> Example command USD ilab model serve --model-path ~/.cache/instructlab/models/granite-8b-code-instruct Example output of when the model is served and ready INFO 2024-03-02 02:21:11,352 lab.py:201 Using model 'models/granite-8b-code-instruct' with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server. 4.1.1. Optional: Running ilab model serve as a service You can set up a systemd service so that the ilab model serve command runs as a running service. The systemd service runs the ilab model serve command in the background and restarts if it crashes or fails. You can configure the service to start upon system boot. Prerequisites You installed the Red Hat Enterprise Linux AI image on bare metal. You initialized InstructLab You downloaded your preferred Granite LLMs. You have root user access on your machine. Procedure. Create a directory for your systemd user service by running the following command: USD mkdir -p USDHOME/.config/systemd/user Create your systemd service file with the following example configurations: USD cat << EOF > USDHOME/.config/systemd/user/ilab-serve.service [Unit] Description=ilab model serve service [Install] WantedBy=multi-user.target default.target 1 [Service] ExecStart=ilab model serve --model-family granite Restart=always EOF 1 Specifies to start by default on boot. Reload the systemd manager configuration by running the following command: USD systemctl --user daemon-reload Start the ilab model serve systemd service by running the following command: USD systemctl --user start ilab-serve.service You can check that the service is running with the following command: USD systemctl --user status ilab-serve.service You can check the service logs by running the following command: USD journalctl --user-unit ilab-serve.service To allow the service to start on boot, run the following command: USD sudo loginctl enable-linger Optional: There are a few optional commands you can run for maintaining your systemd service. You can stop the ilab-serve system service by running the following command: USD systemctl --user stop ilab-serve.service You can prevent the service from starting on boot by removing the "WantedBy=multi-user.target default.target" from the USDHOME/.config/systemd/user/ilab-serve.service file. 4.1.2. Optional: Allowing access to a model from a secure endpoint You can serve an inference endpoint and allow others to interact with models provided with Red Hat Enterprise Linux AI on secure connections by creating a systemd service and setting up a nginx reverse proxy that exposes a secure endpoint. 
This allows you to share the secure endpoint with others so they can chat with the model over a network. The following procedure uses self-signed certifications, but it is recommended to use certificates issued by a trusted Certificate Authority (CA). Note The following procedure is supported only on bare metal platforms. Prerequisites You installed the Red Hat Enterprise Linux AI image on bare-metal. You initialized InstructLab You downloaded your preferred Granite LLMs. You have root user access on your machine. Procedure Create a directory for your certificate file and key by running the following command: USD mkdir -p `pwd`/nginx/ssl/ Create an OpenSSL configuration file with the proper configurations by running the following command: USD cat > openssl.cnf <<EOL [ req ] default_bits = 2048 distinguished_name = <req-distinguished-name> 1 x509_extensions = v3_req prompt = no [ req_distinguished_name ] C = US ST = California L = San Francisco O = My Company OU = My Division CN = rhelai.redhat.com [ v3_req ] subjectAltName = <alt-names> 2 basicConstraints = critical, CA:true subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer [ alt_names ] DNS.1 = rhelai.redhat.com 3 DNS.2 = www.rhelai.redhat.com 4 1 Specify the distinguished name for your requirements. 2 Specify the alternate name for your requirements. 3 4 Specify the server common name for RHEL AI. In the example, the server name is rhelai.redhat.com . Generate a self signed certificate with a Subject Alternative Name (SAN) enabled with the following commands: USD openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout `pwd`/nginx/ssl/rhelai.redhat.com.key -out `pwd`/nginx/ssl/rhelai.redhat.com.crt -config openssl.cnf USD openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout Create the Nginx Configuration file and add it to the `pwd /nginx/conf.d` by running the following command: mkdir -p `pwd`/nginx/conf.d echo 'server { listen 8443 ssl; server_name <rhelai.redhat.com> 1 ssl_certificate /etc/nginx/ssl/rhelai.redhat.com.crt; ssl_certificate_key /etc/nginx/ssl/rhelai.redhat.com.key; location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host USDhost; proxy_set_header X-Real-IP USDremote_addr; proxy_set_header X-Forwarded-For USDproxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto USDscheme; } } ' > `pwd`/nginx/conf.d/rhelai.redhat.com.conf 1 Specify the name of your server. In the example, the server name is rhelai.redhat.com Run the Nginx container with the new configurations by running the following command: USD podman run --net host -v `pwd`/nginx/conf.d:/etc/nginx/conf.d:ro,Z -v `pwd`/nginx/ssl:/etc/nginx/ssl:ro,Z nginx If you want to use port 443, you must run the podman run command as a root user.. You can now connect to a serving ilab machine using a secure endpoint URL. Example command: USD ilab model chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url You can also connect to the serving RHEL AI machine with the following command: USD curl --location 'https://rhelai.redhat.com:8443/v1' \ --header 'Content-Type: application/json' \ --header 'Authorization: Bearer <api-key>' \ --data '{ "model": "/var/home/cloud-user/.cache/instructlab/models/granite-7b-redhat-lab", "messages": [ { "role": "system", "content": "You are a helpful assistant." }, { "role": "user", "content": "Hello!" } ] }' | jq . where <api-key> Specify your API key. You can create your own API key by following the procedure in "Creating an API key for chatting with a model". 
Optional: You can also get the server certificate and append it to the Certifi CA bundle. Get the server certificate by running the following command: USD openssl s_client -connect rhelai.redhat.com:8443 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt Copy the certificate to your system's trusted CA storage directory and update the CA trust store with the following commands: USD sudo cp server.crt /etc/pki/ca-trust/source/anchors/ USD sudo update-ca-trust You can append your certificate to the Certifi CA bundle by running the following command: USD cat server.crt >> USD(python -m certifi) You can now run ilab model chat with a self-signed certificate. Example command: USD ilab model chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1 4.2. Chatting with the model Once you serve your model, you can chat with it. Important The model you are chatting with must match the model you are serving. With the default config.yaml file, the granite-7b-redhat-lab model is the default for serving and chatting. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You downloaded your preferred Granite LLMs. You are serving a model. You have root user access on your machine. Procedure Since you are serving the model in one terminal window, you must open another terminal to chat with the model. To chat with the default model, run the following command: USD ilab model chat To chat with a specific model, run the following command: USD ilab model chat --model <model-path> Example command USD ilab model chat --model ~/.cache/instructlab/models/granite-8b-code-instruct Example output of the chatbot USD ilab model chat ╭──────────────────────────────────────────────────────────── system ────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ GRANITE-8B-CODE-INSTRUCT (type /h for help) │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default] Type exit to leave the chatbot. 4.2.1. Optional: Creating an API key for chatting with a model By default, the ilab CLI does not use authentication. If you want to expose your server to the internet, you can create an API key that connects to your server with the following procedure. Prerequisites You installed the Red Hat Enterprise Linux AI image on bare metal. You initialized InstructLab. You downloaded your preferred Granite LLMs. You have root user access on your machine. Procedure Create an API key that is held in the USDVLLM_API_KEY parameter by running the following command: USD export VLLM_API_KEY=USD(python -c 'import secrets; print(secrets.token_urlsafe())') You can view the API key by running the following command: USD echo USDVLLM_API_KEY Update the config.yaml file by running the following command: USD ilab config edit Add the following parameters to the vllm_args section of your config.yaml file. serve: vllm: vllm_args: - --api-key - <api-key-string> where <api-key-string> specifies your API key string. You can verify that the server is using API key authentication by running the following command: USD ilab model chat You then see the following error, which shows that the request was unauthorized:
openai.AuthenticationError: Error code: 401 - {'error': 'Unauthorized'} Verify that your API key is working by running the following command: USD ilab model chat -m granite-7b-redhat-lab --endpoint-url https://inference.rhelai.com/v1 --api-key USDVLLM_API_KEY Example output USD ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ GRANITE-7B-LAB (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]
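To confirm from the command line that the server now enforces the API key, you can send one request without the key and one with it. This is a minimal sketch that assumes the server is reachable on the default local port 8000 and exposes the OpenAI-compatible /v1/models route; replace the URL with your secure endpoint if you are going through the reverse proxy.

curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8000/v1/models
curl -s -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer $VLLM_API_KEY" http://127.0.0.1:8000/v1/models

The first request should return 401 and the second should return 200, confirming that only requests that present the key are accepted.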
|
[
"ilab model serve",
"ilab model serve --model-path <model-path>",
"ilab model serve --model-path ~/.cache/instructlab/models/granite-8b-code-instruct",
"INFO 2024-03-02 02:21:11,352 lab.py:201 Using model 'models/granite-8b-code-instruct' with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server.",
"mkdir -p USDHOME/.config/systemd/user",
"cat << EOF > USDHOME/.config/systemd/user/ilab-serve.service [Unit] Description=ilab model serve service [Install] WantedBy=multi-user.target default.target 1 [Service] ExecStart=ilab model serve --model-family granite Restart=always EOF",
"systemctl --user daemon-reload",
"systemctl --user start ilab-serve.service",
"systemctl --user status ilab-serve.service",
"journalctl --user-unit ilab-serve.service",
"sudo loginctl enable-linger",
"systemctl --user stop ilab-serve.service",
"mkdir -p `pwd`/nginx/ssl/",
"cat > openssl.cnf <<EOL [ req ] default_bits = 2048 distinguished_name = <req-distinguished-name> 1 x509_extensions = v3_req prompt = no [ req_distinguished_name ] C = US ST = California L = San Francisco O = My Company OU = My Division CN = rhelai.redhat.com [ v3_req ] subjectAltName = <alt-names> 2 basicConstraints = critical, CA:true subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer [ alt_names ] DNS.1 = rhelai.redhat.com 3 DNS.2 = www.rhelai.redhat.com 4",
"openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout `pwd`/nginx/ssl/rhelai.redhat.com.key -out `pwd`/nginx/ssl/rhelai.redhat.com.crt -config openssl.cnf",
"openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout",
"mkdir -p `pwd`/nginx/conf.d echo 'server { listen 8443 ssl; server_name <rhelai.redhat.com> 1 ssl_certificate /etc/nginx/ssl/rhelai.redhat.com.crt; ssl_certificate_key /etc/nginx/ssl/rhelai.redhat.com.key; location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host USDhost; proxy_set_header X-Real-IP USDremote_addr; proxy_set_header X-Forwarded-For USDproxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto USDscheme; } } ' > `pwd`/nginx/conf.d/rhelai.redhat.com.conf",
"podman run --net host -v `pwd`/nginx/conf.d:/etc/nginx/conf.d:ro,Z -v `pwd`/nginx/ssl:/etc/nginx/ssl:ro,Z nginx",
"ilab model chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url",
"curl --location 'https://rhelai.redhat.com:8443/v1' --header 'Content-Type: application/json' --header 'Authorization: Bearer <api-key>' --data '{ \"model\": \"/var/home/cloud-user/.cache/instructlab/models/granite-7b-redhat-lab\", \"messages\": [ { \"role\": \"system\", \"content\": \"You are a helpful assistant.\" }, { \"role\": \"user\", \"content\": \"Hello!\" } ] }' | jq .",
"openssl s_client -connect rhelai.redhat.com:8443 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt",
"sudo cp server.crt /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust",
"cat server.crt >> USD(python -m certifi)",
"ilab model chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1",
"ilab model chat",
"ilab model chat --model <model-path>",
"ilab model chat --model ~/.cache/instructlab/models/granite-8b-code-instruct",
"ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ GRANITE-8B-CODE-INSTRUCT (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]",
"export VLLM_API_KEY=USD(python -c 'import secrets; print(secrets.token_urlsafe())')",
"echo USDVLLM_API_KEY",
"ilab config edit",
"serve: vllm: vllm_args: - --api-key - <api-key-string>",
"ilab model chat",
"openai.AuthenticationError: Error code: 401 - {'error': 'Unauthorized'}",
"ilab model chat -m granite-7b-redhat-lab --endpoint-url https://inference.rhelai.com/v1 --api-key USDVLLM_API_KEY",
"ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ GRANITE-7B-LAB (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/building_your_rhel_ai_environment/serving_and_chatting
|
Upgrade Red Hat Quay
|
Upgrade Red Hat Quay Red Hat Quay 3 Upgrade Red Hat Quay Red Hat OpenShift Documentation Team
|
[
"spec: components: - kind: clair managed: true - kind: clairpostgres managed: true overrides: volumeSize: <volume_size>",
"oc edit quayecosystem <quayecosystemname>",
"metadata: labels: quay-operator/migrate: \"true\"",
"kubectl delete -n <namespace> quayregistry <quayecosystem-name>",
"sudo podman stop <quay_container_name>",
"sudo podman stop <clair_container_id>",
"sudo podman run -d --name <clair_migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=<container_ip_address> \\ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword -v </host/data/directory:/var/lib/pgsql/data:Z> \\ 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] registry.redhat.io/rhel8/postgresql-15",
"mkdir -p /host/data/clair-postgresql15-directory",
"setfacl -m u:26:-wx /host/data/clair-postgresql15-directory",
"sudo podman stop <clair_postgresql13_container_name>",
"sudo podman run -d --rm --name <postgresql15-clairv4> -e POSTGRESQL_USER=<clair_username> -e POSTGRESQL_PASSWORD=<clair_password> -e POSTGRESQL_DATABASE=<clair_database_name> -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> -p 5433:5432 -v </host/data/clair-postgresql15-directory:/var/lib/postgresql/data:Z> registry.redhat.io/rhel8/postgresql-15",
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v /home/<quay_user>/quay-poc/config:/conf/stack:Z -v /home/<quay_user>/quay-poc/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}",
"sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo registry.redhat.io/quay/clair-rhel8:{productminv}",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ec16ece208c0 registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 6 minutes ago Up 6 minutes ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay01",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ae0c9a8b37d registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 5 minutes ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay02",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e75c4aebfee9 registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 4 seconds ago Up 4 seconds ago 0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp quay03",
"sudo podman stop ec16ece208c0",
"sudo podman stop 7ae0c9a8b37d",
"sudo podman stop e75c4aebfee9",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:v{producty}",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay01 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70b9f38c3fb4 registry.redhat.io/quay/quay-rhel8:v{producty} registry 2 seconds ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay01",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay02 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay03 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman ps",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ...",
"get pods -n <quay-namespace>",
"quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: true 1 - kind: quay managed: true - kind: clair managed: true - kind: mirror managed: true ...",
"- apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration-new spec: clusterID: openshift 1 credentialsSecret: name: quay-integration namespace: openshift-operators insecureRegistry: false quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com",
"oc create -f upgrade-quay-integration.yaml",
"oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html-single/upgrade_red_hat_quay/index
|
9.2. National Industrial Security Program Operating Manual (NISPOM)
|
9.2. National Industrial Security Program Operating Manual (NISPOM) The NISPOM (also called DoD 5220.22-M), as a component of the National Industrial Security Program (NISP), establishes a series of procedures and requirements for all government contractors with regard to classified information. The current NISPOM is dated February 28, 2006, with incorporated major changes from March 28, 2013. The NISPOM document can be downloaded from the following URL: http://www.nispom.org/NISPOM-download.html .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-national_industrial_security_program_operating_manual
|
25.5. Working with Queues in Rsyslog
|
25.5. Working with Queues in Rsyslog Queues are used to pass content, mostly syslog messages, between components of rsyslog . With queues, rsyslog is capable of processing multiple messages simultaneously and of applying several actions to a single message at once. The data flow inside rsyslog can be illustrated as follows: Figure 25.1. Message Flow in Rsyslog Whenever rsyslog receives a message, it passes this message to the preprocessor and then places it into the main message queue . Messages wait there to be dequeued and passed to the rule processor . The rule processor is a parsing and filtering engine. Here, the rules defined in /etc/rsyslog.conf are applied. Based on these rules, the rule processor evaluates which actions are to be performed. Each action has its own action queue. Messages are passed through this queue to the respective action processor which creates the final output. Note that at this point, several actions can run simultaneously on one message. For this purpose, a message is duplicated and passed to multiple action processors. Only one queue per action is possible. Depending on configuration, the messages can be sent right to the action processor without action queuing. This is the behavior of direct queues (see below). In case the output action fails, the action processor notifies the action queue, which then takes the unprocessed element back; after some time interval, the action is attempted again. To sum up, there are two positions where queues stand in rsyslog : either in front of the rule processor as a single main message queue or in front of various types of output actions as action queues . Queues provide two main advantages that both lead to increased performance of message processing: they serve as buffers that decouple producers and consumers in the structure of rsyslog , and they allow for parallelization of actions performed on messages. Apart from this, queues can be configured with several directives to provide optimal performance for your system. These configuration options are covered in the following sections. Warning If an output plug-in is unable to deliver a message, it is stored in the preceding message queue. If the queue fills, the inputs block until it is no longer full. This will prevent new messages from being logged via the blocked queue. In the absence of separate action queues this can have severe consequences, such as preventing SSH logging, which in turn can prevent SSH access. Therefore it is advised to use dedicated action queues for outputs which are forwarded over a network or to a database. 25.5.1. Defining Queues Based on where the messages are stored, there are several types of queues: direct , in-memory , disk , and disk-assisted in-memory queues, which are the most widely used. You can choose one of these types for the main message queue and also for action queues. Add the following into /etc/rsyslog.conf : USD object QueueType queue_type Here, you can apply the setting for the main message queue (replace object with MainMsg ) or for an action queue (replace object with Action ). Replace queue_type with one of direct , linkedlist or fixedarray (which are in-memory queues), or disk . The default setting for a main message queue is the FixedArray queue with a limit of 10,000 messages. Action queues are by default set as Direct queues. Direct Queues For many simple operations, such as when writing output to a local file, building a queue in front of an action is not needed.
To avoid queuing, use: USD object QueueType Direct Replace object with MainMsg or with Action to apply this option to the main message queue or to an action queue, respectively. With a direct queue, messages are passed directly and immediately from the producer to the consumer. Disk Queues Disk queues store messages strictly on a hard drive, which makes them highly reliable but also the slowest of all possible queuing modes. This mode can be used to prevent the loss of highly important log data. However, disk queues are not recommended in most use cases. To set a disk queue, type the following into /etc/rsyslog.conf : USD object QueueType Disk Replace object with MainMsg or with Action to apply this option to the main message queue or to an action queue, respectively. Disk queues are written in parts, with a default size of 10 MB. This default size can be modified with the following configuration directive: USD object QueueMaxFileSize size where size represents the specified size of a disk queue part. The defined size limit is not restrictive; rsyslog always writes one complete queue entry, even if it violates the size limit. Each part of a disk queue matches with an individual file. The naming directive for these files looks as follows: USD object QueueFilename name This sets a name prefix for the file followed by a 7-digit number starting at one and incremented for each file. In-memory Queues With an in-memory queue, the enqueued messages are held in memory, which makes the process very fast. The queued data is lost if the computer is power-cycled or shut down. However, you can use the USDActionQueueSaveOnShutdown setting to save the data before shutdown. There are two types of in-memory queues: FixedArray queue - the default mode for the main message queue, with a limit of 10,000 elements. This type of queue uses a fixed, pre-allocated array that holds pointers to queue elements. Due to these pointers, even if the queue is empty a certain amount of memory is consumed. However, FixedArray offers the best run time performance and is optimal when you expect a relatively low number of queued messages and high performance. LinkedList queue - here, all structures are dynamically allocated in a linked list, thus the memory is allocated only when needed. LinkedList queues handle occasional message bursts very well. In general, use LinkedList queues when in doubt. Compared to FixedArray, LinkedList consumes less memory and lowers the processing overhead. Use the following syntax to configure in-memory queues: USD object QueueType LinkedList USD object QueueType FixedArray Replace object with MainMsg or with Action to apply this option to the main message queue or to an action queue, respectively. Disk-Assisted In-memory Queues Both disk and in-memory queues have their advantages and rsyslog lets you combine them in disk-assisted in-memory queues . To do so, configure a normal in-memory queue and then add the USDobjectQueueFileName directive to define a file name for disk assistance. This queue then becomes disk-assisted , which means it couples an in-memory queue with a disk queue to work in tandem. The disk queue is activated if the in-memory queue is full or needs to persist after shutdown. With a disk-assisted queue, you can set both disk-specific and in-memory-specific configuration parameters. This type of queue is probably the most commonly used; it is especially useful for potentially long-running and unreliable actions.
To specify the functioning of a disk-assisted in-memory queue, use the so-called watermarks : USD object QueueHighWatermark number USD object QueueLowWatermark number Replace object with MainMsg or with Action to apply this option to the main message queue or to an action queue, respectively. Replace number with the number of enqueued messages. When an in-memory queue reaches the number defined by the high watermark, it starts writing messages to disk and continues until the in-memory queue size drops to the number defined with the low watermark. Correctly set watermarks minimize unnecessary disk writes, but also leave memory space for message bursts since writing to disk files is rather lengthy. Therefore, the high watermark must be lower than the whole queue capacity set with USDobjectQueueSize . The difference between the high watermark and the overall queue size is a spare memory buffer reserved for message bursts. On the other hand, setting the high watermark too low will turn on disk assistance unnecessarily often. Example 25.12. Reliable Forwarding of Log Messages to a Server Rsyslog is often used to maintain a centralized logging system, where log messages are forwarded to a server over the network. To avoid message loss when the server is not available, it is advisable to configure an action queue for the forwarding action. This way, messages that failed to be sent are stored locally until the server is reachable again. Note that such queues are not configurable for connections using the UDP protocol. To establish a fully reliable connection, for example when your logging server is outside of your private network, consider using the RELP protocol described in Section 25.7.4, "Using RELP" . Procedure 25.2. Forwarding To a Single Server Suppose the task is to forward log messages from the system to a server with host name example.com , and to configure an action queue to buffer the messages in case of a server outage. To do so, perform the following steps: Use the following configuration in /etc/rsyslog.conf or create a file with the following content in the /etc/rsyslog.d/ directory: Where: USDActionQueueType enables a LinkedList in-memory queue, USDActionQueueFileName defines disk storage, in this case the backup files are created in the /var/lib/rsyslog/ directory with the example_fwd prefix, the USDActionResumeRetryCount -1 setting prevents rsyslog from dropping messages when retrying to connect if the server is not responding, enabled USDActionQueueSaveOnShutdown saves in-memory data if rsyslog shuts down, the last line forwards all received messages to the logging server; port specification is optional. With the above configuration, rsyslog keeps messages in memory if the remote server is not reachable. A file on disk is created only if rsyslog runs out of the configured memory queue space or needs to shut down, which benefits the system performance. Procedure 25.3. Forwarding To Multiple Servers The process of forwarding log messages to multiple servers is similar to the previous procedure: Each destination server requires a separate forwarding rule, action queue specification, and backup file on disk. For example, use the following configuration in /etc/rsyslog.conf or create a file with the following content in the /etc/rsyslog.d/ directory (both forwarding configurations are shown in the command listing that follows this section):
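As a separate illustration of the watermark directives described at the start of this section (distinct from the forwarding examples above), the following is a hedged sketch of tuning the main message queue. The file name and numeric values are hypothetical and must be adapted to your system, and the directives can equally be placed directly in /etc/rsyslog.conf ; run the commands as root.

cat << 'EOF' > /etc/rsyslog.d/main-queue-tuning.conf
# Hypothetical tuning of the main message queue (example values only)
$MainMsgQueueType LinkedList
$MainMsgQueueFileName mainq
$MainMsgQueueSize 50000
$MainMsgQueueHighWatermark 40000
$MainMsgQueueLowWatermark 10000
$MainMsgQueueSaveOnShutdown on
EOF
service rsyslog restart

Because the high watermark (40000) is below the overall queue size (50000), the difference acts as the spare memory buffer for bursts, and the queue starts spilling to the mainq disk files only when more than 40000 messages are waiting.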
|
[
"USDActionQueueType LinkedList USDActionQueueFileName example_fwd USDActionResumeRetryCount -1 USDActionQueueSaveOnShutdown on *.* @@example.com:6514",
"USDActionQueueType LinkedList USDActionQueueFileName example_fwd1 USDActionResumeRetryCount -1 USDActionQueueSaveOnShutdown on *.* @@example1.com USDActionQueueType LinkedList USDActionQueueFileName example_fwd2 USDActionResumeRetryCount -1 USDActionQueueSaveOnShutdown on *.* @@example2.com"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-working_with_queues_in_rsyslog
|
Replacing nodes
|
Replacing nodes Red Hat OpenShift Data Foundation 4.14 Instructions for how to safely replace a node in an OpenShift Data Foundation cluster. Red Hat Storage Documentation Team Abstract This document explains how to safely replace a node in a Red Hat OpenShift Data Foundation cluster.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/replacing_nodes/index
|
Machine management
|
Machine management OpenShift Container Platform 4.17 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_management/index
|
Use Red Hat Quay
|
Use Red Hat Quay Red Hat Quay 3.13 Use Red Hat Quay Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/index
|
Chapter 6. Using Multi-Level Security (MLS)
|
Chapter 6. Using Multi-Level Security (MLS) The Multi-Level Security (MLS) policy uses levels of clearance as originally designed by the US defense community. MLS meets a very narrow set of security requirements based on information management in rigidly controlled environments such as the military. Using MLS is complex and does not map well to general use-case scenarios. 6.1. Multi-Level Security (MLS) The Multi-Level Security (MLS) technology classifies data hierarchically using information security levels, for example: [lowest] Unclassified [low] Confidential [high] Secret [highest] Top secret By default, the MLS SELinux policy uses 16 sensitivity levels: s0 is the least sensitive. s15 is the most sensitive. MLS uses specific terminology to address sensitivity levels: Users and processes are called subjects , whose sensitivity level is called clearance . Files, devices, and other passive components of the system are called objects , whose sensitivity level is called classification . To implement MLS, SELinux uses the Bell-La Padula (BLP) model. This model specifies how information can flow within the system based on labels attached to each subject and object. The basic principle of BLP is " No read up, no write down. " This means that users can only read files at their own sensitivity level and lower, and data can flow only from lower levels to higher levels, and never the reverse. The MLS SELinux policy, which is the implementation of MLS on RHEL, applies a modified principle called Bell-La Padula with write equality . This means that users can read files at their own sensitivity level and lower, but can write only at exactly their own level. This prevents, for example, low-clearance users from writing content into top-secret files. For example, by default, a user with clearance level s2 : Can read files with sensitivity levels s0 , s1 , and s2 . Cannot read files with sensitivity level s3 and higher. Can modify files with sensitivity level of exactly s2 . Cannot modify files with sensitivity level other than s2 . Note Security administrators can adjust this behavior by modifying the system's SELinux policy. For example, they can allow users to modify files at lower levels, which increases the file's sensitivity level to the user's clearance level. In practice, users are typically assigned to a range of clearance levels, for example s1-s2 . A user can read files with sensitivity levels lower than the user's maximum level, and write to any files within that range. For example, by default, a user with a clearance range s1-s2 : Can read files with sensitivity levels s0 and s1 . Cannot read files with sensitivity level s2 and higher. Can modify files with sensitivity level s1 . Cannot modify files with sensitivity level other than s1 . Can change their own clearance level to s2 . The security context for a non-privileged user in an MLS environment is, for example: Where: user_u Is the SELinux user. user_r Is the SELinux role. user_t Is the SELinux type. s1 Is the range of MLS sensitivity levels. The system always combines MLS access rules with conventional file access permissions. For example, if a user with a security level of "Secret" uses Discretionary Access Control (DAC) to block access to a file by other users, even "Top Secret" users cannot access that file. A high security clearance does not automatically permit a user to browse the entire file system. Users with top-level clearances do not automatically acquire administrative rights on multi-level systems.
While they might have access to all sensitive information about the system, this is different from having administrative rights. In addition, administrative rights do not provide access to sensitive information. For example, even when someone logs in as root , they still cannot read top-secret information. You can further adjust access within an MLS system by using categories. With Multi-Category Security (MCS), you can define categories such as projects or departments, and users will only be allowed to access files in the categories to which they are assigned. For additional information, see Using Multi-Category Security (MCS) for data confidentiality . 6.2. SELinux roles in MLS The SELinux policy maps each Linux user to an SELinux user. This allows Linux users to inherit the restrictions of SELinux users. Important The MLS policy does not contain the unconfined module, including unconfined users, types, and roles. As a result, users that would be unconfined, including root , cannot access every object and perform every action they could in the targeted policy. You can customize the permissions for confined users in your SELinux policy according to specific needs by adjusting the booleans in policy. You can determine the current state of these booleans by using the semanage boolean -l command. To list all SELinux users, their SELinux roles, and MLS/MCS levels and ranges, use the semanage user -l command as root . Table 6.1. Roles of SELinux users in MLS User Default role Additional roles guest_u guest_r xguest_u xguest_r user_u user_r staff_u staff_r auditadm_r secadm_r sysadm_r staff_r sysadm_u sysadm_r root staff_r auditadm_r secadm_r sysadm_r system_r system_u system_r Note that system_u is a special user identity for system processes and objects, and system_r is the associated role. Administrators must never associate this system_u user and the system_r role to a Linux user. Also, unconfined_u and root are unconfined users. For these reasons, the roles associated to these SELinux users are not included in the following table Types and access of SELinux roles. Each SELinux role corresponds to an SELinux type and provides specific access rights. Table 6.2. Types and access of SELinux roles in MLS Role Type Login using X Window System su and sudo Execute in home directory and /tmp (default) Networking guest_r guest_t no no yes no xguest_r xguest_t yes no yes web browsers only (Firefox, GNOME Web) user_r user_t yes no yes yes staff_r staff_t yes only sudo yes yes auditadm_r auditadm_t yes yes yes secadm_r secadm_t yes yes yes sysadm_r sysadm_t only when the xdm_sysadm_login boolean is on yes yes yes By default, the sysadm_r role has the rights of the secadm_r role, which means a user with the sysadm_r role can manage the security policy. If this does not correspond to your use case, you can separate the two roles by disabling the sysadm_secadm module in the policy. For additional information, see Separating system administration from security administration in MLS . Non-login roles dbadm_r , logadm_r , and webadm_r can be used for a subset of administrative tasks. By default, these roles are not associated with any SELinux user. 6.3. Switching the SELinux policy to MLS Use the following steps to switch the SELinux policy from targeted to Multi-Level Security (MLS). Important Do not use the MLS policy on a system that is running the X Window System. 
Furthermore, when you relabel the file system with MLS labels, the system may prevent confined domains from access, which prevents your system from starting correctly. Therefore ensure that you switch SELinux to permissive mode before you relabel the files. On most systems, you see a lot of SELinux denials after switching to MLS, and many of them are not trivial to fix. Procedure Install the selinux-policy-mls package: Open the /etc/selinux/config file in a text editor of your choice, for example: Change SELinux mode from enforcing to permissive and switch from the targeted policy to MLS: Save the changes, and quit the editor. Before you enable the MLS policy, you must relabel each file on the file system with an MLS label: Restart the system: Check for SELinux denials: Because the command does not cover all scenarios, see Troubleshooting problems related to SELinux for guidance on identifying, analyzing, and fixing SELinux denials. After you ensure that there are no problems related to SELinux on your system, switch SELinux back to enforcing mode by changing the corresponding option in /etc/selinux/config : Restart the system: Important If your system does not start or you are not able to log in after you switch to MLS, add the enforcing=0 parameter to your kernel command line. See Changing SELinux modes at boot time for more information. Also note that in MLS, SSH logins as the root user mapped to the sysadm_r SELinux role differ from logging in as root in staff_r . Before you start your system in MLS for the first time, consider allowing SSH logins as sysadm_r by setting the ssh_sysadm_login SELinux boolean to 1 . To enable ssh_sysadm_login later, already in MLS, you must log in as root in staff_r , switch to root in sysadm_r using the newrole -r sysadm_r command, and then set the boolean to 1 . Verification Verify that SELinux runs in enforcing mode: Check that the status of SELinux returns the mls value: Additional resources fixfiles(8) , setsebool(8) , and ssh_selinux(8) man pages on your system 6.4. Establishing user clearance in MLS After you switch SELinux policy to MLS, you must assign security clearance levels to users by mapping them to confined SELinux users. By default, a user with a given security clearance: Cannot read objects that have a higher sensitivity level. Cannot write to objects at a different sensitivity level. Prerequisites The SELinux policy is set to mls . The SELinux mode is set to enforcing . The policycoreutils-python-utils package is installed. A user assigned to an SELinux confined user: For a non-privileged user, assigned to user_u ( example_user in the following procedure). For a privileged user, assigned to staff_u ( staff in the following procedure) . Important Make sure that the users have been created when the MLS policy was active. Users created in other SELinux policies cannot be used in MLS. Procedure Optional: To prevent adding errors to your SELinux policy, switch to the permissive SELinux mode, which facilitates troubleshooting: Note that in permissive mode, SELinux does not enforce the active policy but only logs Access Vector Cache (AVC) messages, which can be then used for troubleshooting and debugging. Define a clearance range for the staff_u SELinux user. 
For example, this command sets the clearance range from s1 to s15 with s1 being the default clearance level: Generate SELinux file context configuration entries for user home directories: Restore file security contexts to default: # restorecon -R -F -v /home/ Relabeled /home/staff from staff_u:object_r:user_home_dir_t:s0 to staff_u:object_r:user_home_dir_t:s1 Relabeled /home/staff/.bash_logout from staff_u:object_r:user_home_t:s0 to staff_u:object_r:user_home_t:s1 Relabeled /home/staff/.bash_profile from staff_u:object_r:user_home_t:s0 to staff_u:object_r:user_home_t:s1 Relabeled /home/staff/.bashrc from staff_u:object_r:user_home_t:s0 to staff_u:object_r:user_home_t:s1 Assign a clearance level to the user: Where s1 is the clearance level assigned to the user. Relabel the user's home directory to the user's clearance level: Optional: If you previously switched to the permissive SELinux mode, and after you verify that everything works as expected, switch back to the enforcing SELinux mode: Verification Verify that the user is mapped to the correct SELinux user and has the correct clearance level assigned: # semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ user_u s0-s0 * example_user user_u s1 * ... Log in as the user within MLS. Verify that the user's security level works correctly: Warning The files you use for verification should not contain any sensitive information in case the configuration is incorrect and the user actually can access the files without authorization. Verify that the user cannot read a file with a higher-level sensitivity. Verify that the user can write to a file with the same sensitivity. Verify that the user can read a file with a lower-level sensitivity. Additional resources Switching the SELinux policy to MLS Adding a new user as an SELinux-confined user Permanent changes in SELinux states and modes Troubleshooting problems related to SELinux Basic SELinux Troubleshooting in CLI Knowledgebase article 6.5. Changing a user's clearance level within the defined security range in MLS As a user in Multi-Level Security (MLS), you can change your current clearance level within the range the administrator assigned to you. You can never exceed the upper limit of your range or reduce your level below the lower limit of your range. This allows you, for example, to modify lower-sensitivity files without increasing their sensitivity level to your highest clearance level. For example, as a user assigned to range s1-s3 : You can switch to levels s1 , s2 , and s3 . You can switch to ranges s1-s2 , and s2-s3 . You cannot switch to ranges s0-s3 or s1-s4 . Switching to a different level opens a new shell with the different clearance. This means you cannot return to your original clearance level in the same way as decreasing it. However, you can always return to the shell by entering exit . Prerequisites The SELinux policy is set to mls . SELinux mode is set to enforcing . You can log in as a user assigned to a range of MLS clearance levels. Procedure Log in as the user from a secure terminal. Secure terminals are defined in the /etc/selinux/mls/contexts/securetty_types file. By default, the console is a secure terminal, but SSH is not. Check the current user's security context: In this example, the user is assigned to the user_u SELinux user, user_r role, user_t type, and the MLS security range s0-s2 . 
Check the current user's security context: Switch to a different security clearance range within the user's clearance range: You can switch to any range whose maximum is lower than or equal to the maximum of your assigned range. Entering a single-level range changes the lower limit of the assigned range. For example, entering newrole -l s1 as a user with an s0-s2 range is equivalent to entering newrole -l s1-s2 . Verification Display the current user's security context: Return to the shell with the original range by terminating the current shell: Additional resources Establishing user clearance in MLS newrole(1) and securetty_types(5) man pages on your system 6.6. Increasing file sensitivity levels in MLS By default, Multi-Level Security (MLS) users cannot increase file sensitivity levels. However, the security administrator ( secadm_r ) can change this default behavior to allow users to increase the sensitivity of files by adding the local module mlsfilewrite to the system's SELinux policy. Then, users assigned to the SELinux type defined in the policy module can increase file classification levels by modifying the file. Any time a user modifies a file, the file's sensitivity level increases to the lower value of the user's current security range. The security administrator, when logged in as a user assigned to the secadm_r role, can change the security levels of files by using the chcon -l s0 /path/to/file command. For more information, see Changing file sensitivity in MLS . Prerequisites The SELinux policy is set to mls . SELinux mode is set to enforcing . The policycoreutils-python-utils package is installed. The mlsfilewrite local module is installed in the SELinux MLS policy. You are logged in as a user in MLS who is: Assigned to a defined security range. This example shows a user with a security range s0-s2 . Assigned to the same SELinux type defined in the mlsfilewrite module. This example requires the (typeattributeset mlsfilewrite (user_t)) module. Procedure Optional: Display the security context of the current user: Change the lower level of the user's MLS clearance range to the level which you want to assign to the file: Optional: Display the security context of the current user: Optional: Display the security context of the file: Change the file's sensitivity level to the lower level of the user's clearance range by modifying the file: Important The classification level reverts to the default value if the restorecon command is used on the system. Optional: Exit the shell to return to the user's security range: Verification Display the security context of the file: Additional resources Allowing MLS users to edit files on lower levels . 6.7. Changing file sensitivity in MLS In the MLS SELinux policy, users can only modify files at their own sensitivity level. This is intended to prevent any highly sensitive information from being exposed to users at lower clearance levels, and also to prevent low-clearance users from creating high-sensitivity documents. Administrators, however, can manually increase a file's classification, for example so that the file can be processed at a higher level. Prerequisites SELinux policy is set to mls . SELinux mode is set to enforcing. You have security administration rights, which means that you are assigned to either: The secadm_r role. If the sysadm_secadm module is enabled, to the sysadm_r role. The sysadm_secadm module is enabled by default. The policycoreutils-python-utils package is installed. A user assigned to any clearance level.
For additional information, see Establishing user clearance levels in MLS . In this example, User1 has clearance level s1 . A file with a classification level assigned and to which you have access. In this example, /path/to/file has classification level s1 . Procedure Check the file's classification level: # ls -lZ /path/to/file -rw-r-----. 1 User1 User1 user_u:object_r:user_home_t: s1 0 12. Feb 10:43 /path/to/file Change the file's default classification level: # semanage fcontext -a -r s2 /path/to/file Force the relabeling of the file's SELinux context: Verification Check the file's classification level: # ls -lZ /path/to/file -rw-r-----. 1 User1 User1 user_u:object_r:user_home_t: s2 0 12. Feb 10:53 /path/to/file Optional: Verify that the lower-clearance user cannot read the file: Additional resources Establishing user clearance levels in MLS . 6.8. Separating system administration from security administration in MLS By default, the sysadm_r role has the rights of the secadm_r role, which means a user with the sysadm_r role can manage the security policy. If you need more control over security authorizations, you can separate system administration from security administration by assigning a Linux user to the secadm_r role and disabling the sysadm_secadm module in the SELinux policy. Prerequisites The SELinux policy is set to mls . The SELinux mode is set to enforcing . The policycoreutils-python-utils package is installed. A Linux user which will be assigned to the secadm_r role: The user is assigned to the staff_u SELinux user A password for this user has been defined. Warning Make sure you can log in as the user which will be assigned to the secadm role. If not, you can prevent any future modifications of the system's SELinux policy. Procedure Create a new sudoers file in the /etc/sudoers.d directory for the user: To keep the sudoers files organized, replace <sec_adm_user> with the Linux user which will be assigned to the secadm role. Add the following content into the /etc/sudoers.d/ <sec_adm_user> file: This line authorizes <secadmuser> on all hosts to perform all commands, and maps the user to the secadm SELinux type and role by default. Log in as the <sec_adm_user> user. To make sure that the SELinux context (which consists of SELinux user, role, and type) is changed, log in using ssh , the console, or xdm . Other ways, such as su and sudo , cannot change the entire SELinux context. Verify the user's security context: Run the interactive shell for the root user: Verify the current user's security context: Disable the sysadm_secadm module from the policy: Important Use the semodule -d command instead of removing the system policy module by using the semodule -r command. The semodule -r command deletes the module from your system's storage, which means it cannot be loaded again without reinstalling the selinux-policy-mls package. Verification As the user assigned to the secadm role, and in the interactive shell for the root user, verify that you can access the security policy data: Log out from the root shell: Log out from the <sec_adm_user> user: Display the current security context: Attempt to enable the sysadm_secadm module. The command should fail: # semodule -e sysadm_secadm SELinux: Could not load policy file /etc/selinux/mls/policy/policy.31: Permission denied /sbin/load_policy: Can't load policy: Permission denied libsemanage.semanage_reload_policy: load_policy returned error code 2. (No such file or directory). 
SELinux: Could not load policy file /etc/selinux/mls/policy/policy.31: Permission denied /sbin/load_policy: Can't load policy: Permission denied libsemanage.semanage_reload_policy: load_policy returned error code 2. (No such file or directory). semodule: Failed! Attempt to display the details about the sysadm_t SELinux type. The command should fail: 6.9. Defining a secure terminal in MLS The SELinux policy checks the type of the terminal from which a user is connected, and allows running of certain SELinux applications, for example newrole , only from secure terminals. Attempting this from a non-secure terminal produces an error: Error: you are not allowed to change levels on a non secure terminal; . The /etc/selinux/mls/contexts/securetty_types file defines secure terminals for the Multi-Level Security (MLS) policy. Default contents of the file: Warning Adding terminal types to the list of secure terminals can expose your system to security risks. Prerequisites SELinux policy is set to mls . You are connected from an already secure terminal, or SELinux is in permissive mode. You have security administration rights, which means that you are assigned to either: The secadm_r role. If the sysadm_secadm module is enabled, to the sysadm_r role. The sysadm_secadm module is enabled by default. The policycoreutils-python-utils package is installed. Procedure Determine the current terminal type: In this example output, user_devpts_t is the current terminal type. Add the relevant SELinux type on a new line in the /etc/selinux/mls/contexts/securetty_types file. Optional: Switch SELinux to enforcing mode: Verification Log in from the previously insecure terminal you have added to the /etc/selinux/mls/contexts/securetty_types file. Additional resources securetty_types(5) man page on your system 6.10. Allowing MLS users to edit files on lower levels By default, MLS users cannot write to files which have a sensitivity level below the lower value of the clearance range. If your scenario requires allowing users to edit files on lower levels, you can do so by creating a local SELinux module. However, writing to a file will increase its sensitivity level to the lower value of the user's current range. Prerequisites The SELinux policy is set to mls . The SELinux mode is set to enforcing . The policycoreutils-python-utils package is installed. The setools-console and audit packages for verification. Procedure Optional: Switch to permissive mode for easier troubleshooting. Open a new .cil file with a text editor, for example ~/local_mlsfilewrite.cil , and insert the following custom rule: You can replace staff_t with a different SELinux type. By specifying SELinux type here, you can control which SELinux roles can edit lower-level files. To keep your local modules better organized, use the local_ prefix in the names of local SELinux policy modules. Install the policy module: Note To remove the local policy module, use semodule -r ~/ local_mlsfilewrite . Note that you must refer to the module name without the .cil suffix. Optional: If you previously switched back to permissive mode, return to enforcing mode: Verification Find the local module in the list of installed SELinux modules: Because local modules have priority 400 , you can list them also by using the semodule -lfull | grep -v ^100 command. Log in as a user assigned to the type defined in the custom rule, for example, staff_t . Attempt to write to a file with a lower sensitivity level. 
This increases the file's classification level to the user's clearance level. Important The files you use for verification should not contain any sensitive information in case the configuration is incorrect and the user actually can access the files without authorization.
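As a brief verification sketch tying together the behavior described above: after a user confined to the type named in the mlsfilewrite module (staff_t in this example) writes to a file labeled below their range, the file's level should rise to the lower bound of the user's clearance range. The path, levels, and output shown here are hypothetical; id -Z and ls -Z are the same commands used throughout this chapter.

id -Z                      # confirm the clearance range, for example ...:s1-s2
ls -Z /path/to/lowfile     # before the write, for example ...user_home_t:s0
echo "update" >> /path/to/lowfile
ls -Z /path/to/lowfile     # after the write, the level should show s1

If the level does not change, verify that the local_mlsfilewrite module is installed and that the user runs with the expected SELinux type.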
|
[
"user_u:user_r:user_t:s1",
"dnf install selinux-policy-mls",
"vi /etc/selinux/config",
"SELINUX=permissive SELINUXTYPE=mls",
"fixfiles -F onboot System will relabel on next boot",
"reboot",
"ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR -ts recent -i",
"SELINUX=enforcing",
"reboot",
"getenforce Enforcing",
"sestatus | grep mls Loaded policy name: mls",
"setenforce 0",
"semanage user -m -L s1 -r s1-s15 staff_u",
"genhomedircon",
"restorecon -R -F -v /home/ Relabeled /home/staff from staff_u:object_r:user_home_dir_t:s0 to staff_u:object_r:user_home_dir_t:s1 Relabeled /home/staff/.bash_logout from staff_u:object_r:user_home_t:s0 to staff_u:object_r:user_home_t:s1 Relabeled /home/staff/.bash_profile from staff_u:object_r:user_home_t:s0 to staff_u:object_r:user_home_t:s1 Relabeled /home/staff/.bashrc from staff_u:object_r:user_home_t:s0 to staff_u:object_r:user_home_t:s1",
"semanage login -m -r s1 example_user",
"chcon -R -l s1 /home/ example_user",
"setenforce 1",
"semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ user_u s0-s0 * example_user user_u s1 * ...",
"id -Z user_u:user_r:user_t: s0-s2",
"id -Z user_u:user_r:user_t: s1-s2",
"newrole -l s1",
"id -Z user_u:user_r:user_t: s1-s2",
"exit",
"id -Z user_u:user_r:user_t: s0-s2",
"newrole -l s1-s2",
"id -Z user_u:user_r:user_t: s1-s2",
"ls -Z /path/to/file user_u:object_r:user_home_t: s0 /path/to/file",
"touch /path/to/file",
"exit",
"ls -Z /path/to/file user_u:object_r:user_home_t: s1 /path/to/file",
"ls -lZ /path/to/file -rw-r-----. 1 User1 User1 user_u:object_r:user_home_t: s1 0 12. Feb 10:43 /path/to/file",
"semanage fcontext -a -r s2 /path/to/file",
"restorecon -F -v /path/to/file Relabeled /path/to/file from user_u:object_r:user_home_t: s1 to user_u:object_r:user_home_t:s2",
"ls -lZ /path/to/file -rw-r-----. 1 User1 User1 user_u:object_r:user_home_t: s2 0 12. Feb 10:53 /path/to/file",
"cat /path/to/file cat: file: Permission denied",
"visudo -f /etc/sudoers.d/ <sec_adm_user>",
"<sec_adm_user> ALL=(ALL) TYPE=secadm_t ROLE=secadm_r ALL",
"id uid=1000( <sec_adm_user> ) gid=1000( <sec_adm_user> ) groups=1000( <sec_adm_user> ) context=staff_u:staff_r:staff_t:s0-s15:c0.c1023",
"sudo -i [sudo] password for <sec_adm_user> :",
"id uid=0(root) gid=0(root) groups=0(root) context=staff_u:secadm_r:secadm_t:s0-s15:c0.c1023",
"semodule -d sysadm_secadm",
"seinfo -xt secadm_t Types: 1 type secadm_t, can_relabelto_shadow_passwords, (...) userdomain;",
"logout",
"logout Connection to localhost closed.",
"id uid=0(root) gid=0(root) groups=0(root) context=root:sysadm_r:sysadm_t:s0-s15:c0.c1023",
"semodule -e sysadm_secadm SELinux: Could not load policy file /etc/selinux/mls/policy/policy.31: Permission denied /sbin/load_policy: Can't load policy: Permission denied libsemanage.semanage_reload_policy: load_policy returned error code 2. (No such file or directory). SELinux: Could not load policy file /etc/selinux/mls/policy/policy.31: Permission denied /sbin/load_policy: Can't load policy: Permission denied libsemanage.semanage_reload_policy: load_policy returned error code 2. (No such file or directory). semodule: Failed!",
"seinfo -xt sysadm_t [Errno 13] Permission denied: '/sys/fs/selinux/policy'",
"console_device_t sysadm_tty_device_t user_tty_device_t staff_tty_device_t auditadm_tty_device_t secureadm_tty_device_t",
"ls -Z `tty` root:object_r: user_devpts_t :s0 /dev/pts/0",
"setenforce 1",
"setenforce 0",
"(typeattributeset mlsfilewrite (_staff_t_))",
"semodule -i ~/ local_mlsfilewrite .cil",
"setenforce 1",
"semodule -lfull | grep \"local_mls\" 400 local_mlsfilewrite cil"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_selinux/using-multi-level-security-mls_using-selinux
|
Preface
|
Preface Authentication within Red Hat Developer Hub facilitates user sign-in, identification, and access to external resources. It supports multiple authentication providers. Authentication providers are typically used in the following ways: One provider for sign-in and identification. Additional providers for accessing external resources. The Red Hat Developer Hub supports the following authentication providers: Microsoft Azure microsoft GitHub github Keycloak oidc For each provider that you want to use, follow the dedicated procedure to complete the following tasks: Set up the shared secret that the authentication provider and Red Hat Developer Hub require to communicate. Configure Red Hat Developer Hub to use the authentication provider.
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/authentication/pr01
|
Appendix C. CLI help
|
Appendix C. CLI help Satellite offers multiple user interfaces: the Satellite web UI, the Hammer CLI, the API, and the redhat.satellite Ansible collection. If you want to administer Satellite on the command line, review the following help output. Satellite services A set of services that Satellite Server and Capsule Servers use for operation. You can use the satellite-maintain tool to manage these services. To see the full list of services, enter the satellite-maintain service list command on the machine where Satellite or Capsule Server is installed. For more information, run satellite-maintain --help on your Satellite Server or Capsule Server. Satellite plugins You can extend Satellite by installing plugins. For more information, run satellite-installer --full-help on your Satellite Server or Capsule Server. Hammer CLI You can manage Satellite on the command line using hammer . For more information on using Hammer CLI, see Using the Hammer CLI tool or run hammer --help on your Satellite Server or Capsule Server.
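For example, the help and listing commands named above can be run directly on a Satellite Server or Capsule Server. This is only a pointer to the built-in help; the exact output depends on the installed version and plugins.

satellite-maintain service list
satellite-maintain --help
satellite-installer --full-help
hammer --help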
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/overview_concepts_and_deployment_considerations/cli-help_planning
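Because the section only names the tools, a few concrete invocations may help. The list and help commands are taken from the text above; service status and hammer host --help are added as typical examples of the same pattern.

# Services managed by satellite-maintain.
satellite-maintain service list
satellite-maintain service status

# Help output for the maintenance tool and the installer plugin options.
satellite-maintain --help
satellite-installer --full-help

# Hammer CLI help; every subcommand accepts --help as well.
hammer --help
hammer host --help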
|
High Availability Guide
|
High Availability Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
|
[
"aws ec2 create-vpc --cidr-block 192.168.0.0/16 --tag-specifications \"ResourceType=vpc, Tags=[{Key=AuroraCluster,Value=keycloak-aurora}]\" \\ 1 --region eu-west-1",
"{ \"Vpc\": { \"CidrBlock\": \"192.168.0.0/16\", \"DhcpOptionsId\": \"dopt-0bae7798158bc344f\", \"State\": \"pending\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"InstanceTenancy\": \"default\", \"Ipv6CidrBlockAssociationSet\": [], \"CidrBlockAssociationSet\": [ { \"AssociationId\": \"vpc-cidr-assoc-09a02a83059ba5ab6\", \"CidrBlock\": \"192.168.0.0/16\", \"CidrBlockState\": { \"State\": \"associated\" } } ], \"IsDefault\": false } }",
"aws ec2 create-subnet --availability-zone \"eu-west-1a\" --vpc-id vpc-0b40bd7c59dbe4277 --cidr-block 192.168.0.0/19 --region eu-west-1",
"{ \"Subnet\": { \"AvailabilityZone\": \"eu-west-1a\", \"AvailabilityZoneId\": \"euw1-az3\", \"AvailableIpAddressCount\": 8187, \"CidrBlock\": \"192.168.0.0/19\", \"DefaultForAz\": false, \"MapPublicIpOnLaunch\": false, \"State\": \"available\", \"SubnetId\": \"subnet-0d491a1a798aa878d\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"AssignIpv6AddressOnCreation\": false, \"Ipv6CidrBlockAssociationSet\": [], \"SubnetArn\": \"arn:aws:ec2:eu-west-1:606671647913:subnet/subnet-0d491a1a798aa878d\", \"EnableDns64\": false, \"Ipv6Native\": false, \"PrivateDnsNameOptionsOnLaunch\": { \"HostnameType\": \"ip-name\", \"EnableResourceNameDnsARecord\": false, \"EnableResourceNameDnsAAAARecord\": false } } }",
"aws ec2 create-subnet --availability-zone \"eu-west-1b\" --vpc-id vpc-0b40bd7c59dbe4277 --cidr-block 192.168.32.0/19 --region eu-west-1",
"{ \"Subnet\": { \"AvailabilityZone\": \"eu-west-1b\", \"AvailabilityZoneId\": \"euw1-az1\", \"AvailableIpAddressCount\": 8187, \"CidrBlock\": \"192.168.32.0/19\", \"DefaultForAz\": false, \"MapPublicIpOnLaunch\": false, \"State\": \"available\", \"SubnetId\": \"subnet-057181b1e3728530e\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"AssignIpv6AddressOnCreation\": false, \"Ipv6CidrBlockAssociationSet\": [], \"SubnetArn\": \"arn:aws:ec2:eu-west-1:606671647913:subnet/subnet-057181b1e3728530e\", \"EnableDns64\": false, \"Ipv6Native\": false, \"PrivateDnsNameOptionsOnLaunch\": { \"HostnameType\": \"ip-name\", \"EnableResourceNameDnsARecord\": false, \"EnableResourceNameDnsAAAARecord\": false } } }",
"aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0b40bd7c59dbe4277 --region eu-west-1",
"{ \"RouteTables\": [ { \"Associations\": [ { \"Main\": true, \"RouteTableAssociationId\": \"rtbassoc-02dfa06f4c7b4f99a\", \"RouteTableId\": \"rtb-04a644ad3cd7de351\", \"AssociationState\": { \"State\": \"associated\" } } ], \"PropagatingVgws\": [], \"RouteTableId\": \"rtb-04a644ad3cd7de351\", \"Routes\": [ { \"DestinationCidrBlock\": \"192.168.0.0/16\", \"GatewayId\": \"local\", \"Origin\": \"CreateRouteTable\", \"State\": \"active\" } ], \"Tags\": [], \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\" } ] }",
"aws ec2 associate-route-table --route-table-id rtb-04a644ad3cd7de351 --subnet-id subnet-0d491a1a798aa878d --region eu-west-1",
"aws ec2 associate-route-table --route-table-id rtb-04a644ad3cd7de351 --subnet-id subnet-057181b1e3728530e --region eu-west-1",
"aws rds create-db-subnet-group --db-subnet-group-name keycloak-aurora-subnet-group --db-subnet-group-description \"Aurora DB Subnet Group\" --subnet-ids subnet-0d491a1a798aa878d subnet-057181b1e3728530e --region eu-west-1",
"aws ec2 create-security-group --group-name keycloak-aurora-security-group --description \"Aurora DB Security Group\" --vpc-id vpc-0b40bd7c59dbe4277 --region eu-west-1",
"{ \"GroupId\": \"sg-0d746cc8ad8d2e63b\" }",
"aws rds create-db-cluster --db-cluster-identifier keycloak-aurora --database-name keycloak --engine aurora-postgresql --engine-version USD{properties[\"aurora-postgresql.version\"]} --master-username keycloak --master-user-password secret99 --vpc-security-group-ids sg-0d746cc8ad8d2e63b --db-subnet-group-name keycloak-aurora-subnet-group --region eu-west-1",
"{ \"DBCluster\": { \"AllocatedStorage\": 1, \"AvailabilityZones\": [ \"eu-west-1b\", \"eu-west-1c\", \"eu-west-1a\" ], \"BackupRetentionPeriod\": 1, \"DatabaseName\": \"keycloak\", \"DBClusterIdentifier\": \"keycloak-aurora\", \"DBClusterParameterGroup\": \"default.aurora-postgresql15\", \"DBSubnetGroup\": \"keycloak-aurora-subnet-group\", \"Status\": \"creating\", \"Endpoint\": \"keycloak-aurora.cluster-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\", \"ReaderEndpoint\": \"keycloak-aurora.cluster-ro-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\", \"MultiAZ\": false, \"Engine\": \"aurora-postgresql\", \"EngineVersion\": \"15.3\", \"Port\": 5432, \"MasterUsername\": \"keycloak\", \"PreferredBackupWindow\": \"02:21-02:51\", \"PreferredMaintenanceWindow\": \"fri:03:34-fri:04:04\", \"ReadReplicaIdentifiers\": [], \"DBClusterMembers\": [], \"VpcSecurityGroups\": [ { \"VpcSecurityGroupId\": \"sg-0d746cc8ad8d2e63b\", \"Status\": \"active\" } ], \"HostedZoneId\": \"Z29XKXDKYMONMX\", \"StorageEncrypted\": false, \"DbClusterResourceId\": \"cluster-IBWXUWQYM3MS5BH557ZJ6ZQU4I\", \"DBClusterArn\": \"arn:aws:rds:eu-west-1:606671647913:cluster:keycloak-aurora\", \"AssociatedRoles\": [], \"IAMDatabaseAuthenticationEnabled\": false, \"ClusterCreateTime\": \"2023-11-01T10:40:45.964000+00:00\", \"EngineMode\": \"provisioned\", \"DeletionProtection\": false, \"HttpEndpointEnabled\": false, \"CopyTagsToSnapshot\": false, \"CrossAccountClone\": false, \"DomainMemberships\": [], \"TagList\": [], \"AutoMinorVersionUpgrade\": true, \"NetworkType\": \"IPV4\" } }",
"aws rds create-db-instance --db-cluster-identifier keycloak-aurora --db-instance-identifier \"keycloak-aurora-instance-1\" --db-instance-class db.t4g.large --engine aurora-postgresql --region eu-west-1",
"aws rds create-db-instance --db-cluster-identifier keycloak-aurora --db-instance-identifier \"keycloak-aurora-instance-2\" --db-instance-class db.t4g.large --engine aurora-postgresql --region eu-west-1",
"aws rds wait db-instance-available --db-instance-identifier keycloak-aurora-instance-1 --region eu-west-1 aws rds wait db-instance-available --db-instance-identifier keycloak-aurora-instance-2 --region eu-west-1",
"aws rds describe-db-clusters --db-cluster-identifier keycloak-aurora --query 'DBClusters[*].Endpoint' --region eu-west-1 --output text",
"[ \"keycloak-aurora.cluster-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\" ]",
"aws ec2 describe-vpcs --filters \"Name=tag:AuroraCluster,Values=keycloak-aurora\" --query 'Vpcs[*].VpcId' --region eu-west-1 --output text",
"vpc-0b40bd7c59dbe4277",
"NODE=USD(oc get nodes --selector=node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}') aws ec2 describe-instances --filters \"Name=private-dns-name,Values=USD{NODE}\" --query 'Reservations[0].Instances[0].VpcId' --region eu-west-1 --output text",
"vpc-0b721449398429559",
"aws ec2 create-vpc-peering-connection --vpc-id vpc-0b721449398429559 \\ 1 --peer-vpc-id vpc-0b40bd7c59dbe4277 \\ 2 --peer-region eu-west-1 --region eu-west-1",
"{ \"VpcPeeringConnection\": { \"AccepterVpcInfo\": { \"OwnerId\": \"606671647913\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"Region\": \"eu-west-1\" }, \"ExpirationTime\": \"2023-11-08T13:26:30+00:00\", \"RequesterVpcInfo\": { \"CidrBlock\": \"10.0.17.0/24\", \"CidrBlockSet\": [ { \"CidrBlock\": \"10.0.17.0/24\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b721449398429559\", \"Region\": \"eu-west-1\" }, \"Status\": { \"Code\": \"initiating-request\", \"Message\": \"Initiating Request to 606671647913\" }, \"Tags\": [], \"VpcPeeringConnectionId\": \"pcx-0cb23d66dea3dca9f\" } }",
"aws ec2 wait vpc-peering-connection-exists --vpc-peering-connection-ids pcx-0cb23d66dea3dca9f",
"aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0cb23d66dea3dca9f --region eu-west-1",
"{ \"VpcPeeringConnection\": { \"AccepterVpcInfo\": { \"CidrBlock\": \"192.168.0.0/16\", \"CidrBlockSet\": [ { \"CidrBlock\": \"192.168.0.0/16\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"Region\": \"eu-west-1\" }, \"RequesterVpcInfo\": { \"CidrBlock\": \"10.0.17.0/24\", \"CidrBlockSet\": [ { \"CidrBlock\": \"10.0.17.0/24\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b721449398429559\", \"Region\": \"eu-west-1\" }, \"Status\": { \"Code\": \"provisioning\", \"Message\": \"Provisioning\" }, \"Tags\": [], \"VpcPeeringConnectionId\": \"pcx-0cb23d66dea3dca9f\" } }",
"ROSA_PUBLIC_ROUTE_TABLE_ID=USD(aws ec2 describe-route-tables --filters \"Name=vpc-id,Values=vpc-0b721449398429559\" \"Name=association.main,Values=true\" \\ 1 --query \"RouteTables[*].RouteTableId\" --output text --region eu-west-1 ) aws ec2 create-route --route-table-id USD{ROSA_PUBLIC_ROUTE_TABLE_ID} --destination-cidr-block 192.168.0.0/16 \\ 2 --vpc-peering-connection-id pcx-0cb23d66dea3dca9f --region eu-west-1",
"AURORA_SECURITY_GROUP_ID=USD(aws ec2 describe-security-groups --filters \"Name=group-name,Values=keycloak-aurora-security-group\" --query \"SecurityGroups[*].GroupId\" --region eu-west-1 --output text ) aws ec2 authorize-security-group-ingress --group-id USD{AURORA_SECURITY_GROUP_ID} --protocol tcp --port 5432 --cidr 10.0.17.0/24 \\ 1 --region eu-west-1",
"{ \"Return\": true, \"SecurityGroupRules\": [ { \"SecurityGroupRuleId\": \"sgr-0785d2f04b9cec3f5\", \"GroupId\": \"sg-0d746cc8ad8d2e63b\", \"GroupOwnerId\": \"606671647913\", \"IsEgress\": false, \"IpProtocol\": \"tcp\", \"FromPort\": 5432, \"ToPort\": 5432, \"CidrIpv4\": \"10.0.17.0/24\" } ] }",
"USER=keycloak 1 PASSWORD=secret99 2 DATABASE=keycloak 3 HOST=USD(aws rds describe-db-clusters --db-cluster-identifier keycloak-aurora \\ 4 --query 'DBClusters[*].Endpoint' --region eu-west-1 --output text ) run -i --tty --rm debug --image=postgres:15 --restart=Never -- psql postgresql://USD{USER}:USD{PASSWORD}@USD{HOST}/USD{DATABASE}",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: labels: app: keycloak name: keycloak namespace: keycloak spec: hostname: hostname: <KEYCLOAK_URL_HERE> resources: requests: cpu: \"2\" memory: \"1250M\" limits: cpu: \"6\" memory: \"2250M\" db: vendor: postgres url: jdbc:aws-wrapper:postgresql://<AWS_AURORA_URL_HERE>:5432/keycloak poolMinSize: 30 1 poolInitialSize: 30 poolMaxSize: 30 usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password image: <KEYCLOAK_IMAGE_HERE> 2 startOptimized: false 3 features: enabled: - multi-site 4 transaction: xaEnabled: false 5 additionalOptions: - name: http-max-queued-requests value: \"1000\" - name: log-console-output value: json - name: metrics-enabled 6 value: 'true' - name: http-pool-max-threads 7 value: \"66\" - name: db-driver value: software.amazon.jdbc.Driver http: tlsSecret: keycloak-tls-secret instances: 3",
"wait --for=condition=Ready keycloaks.k8s.keycloak.org/keycloak wait --for=condition=RollingUpdate=False keycloaks.k8s.keycloak.org/keycloak",
"spec: additionalOptions: - name: http-max-queued-requests value: \"1000\"",
"spec: ingress: enabled: true annotations: # When running load tests, disable sticky sessions on the OpenShift HAProxy router # to avoid receiving all requests on a single Red Hat build of Keycloak Pod. haproxy.router.openshift.io/balance: roundrobin haproxy.router.openshift.io/disable_cookies: 'true'",
"credentials: - username: developer password: strong-password roles: - admin",
"apiVersion: v1 kind: Secret type: Opaque metadata: name: connect-secret namespace: keycloak data: identities.yaml: Y3JlZGVudGlhbHM6CiAgLSB1c2VybmFtZTogZGV2ZWxvcGVyCiAgICBwYXNzd29yZDogc3Ryb25nLXBhc3N3b3JkCiAgICByb2xlczoKICAgICAgLSBhZG1pbgo= 1",
"create secret generic connect-secret --from-file=identities.yaml",
"apiVersion: v1 kind: Secret metadata: name: ispn-xsite-sa-token 1 annotations: kubernetes.io/service-account.name: \"xsite-sa\" 2 type: kubernetes.io/service-account-token",
"create sa -n keycloak xsite-sa policy add-role-to-user view -n keycloak -z xsite-sa create -f xsite-sa-secret-token.yaml get secrets ispn-xsite-sa-token -o jsonpath=\"{.data.token}\" | base64 -d > Site-A-token.txt",
"create sa -n keycloak xsite-sa policy add-role-to-user view -n keycloak -z xsite-sa create -f xsite-sa-secret-token.yaml get secrets ispn-xsite-sa-token -o jsonpath=\"{.data.token}\" | base64 -d > Site-B-token.txt",
"create secret generic -n keycloak xsite-token-secret --from-literal=token=\"USD(cat Site-B-token.txt)\"",
"create secret generic -n keycloak xsite-token-secret --from-literal=token=\"USD(cat Site-A-token.txt)\"",
"-n keycloak create secret generic xsite-keystore-secret --from-file=keystore.p12=\"./certs/keystore.p12\" \\ 1 --from-literal=password=secret \\ 2 --from-literal=type=pkcs12 3",
"-n keycloak create secret generic xsite-truststore-secret --from-file=truststore.p12=\"./certs/truststore.p12\" \\ 1 --from-literal=password=caSecret \\ 2 --from-literal=type=pkcs12 3",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan 1 namespace: keycloak annotations: infinispan.org/monitoring: 'true' 2 spec: replicas: 3 security: endpointSecretName: connect-secret 3 service: type: DataGrid sites: local: name: site-a 4 expose: type: Route 5 maxRelayNodes: 128 encryption: transportKeyStore: secretName: xsite-keystore-secret 6 alias: xsite 7 filename: keystore.p12 8 routerKeyStore: secretName: xsite-keystore-secret 9 alias: xsite 10 filename: keystore.p12 11 trustStore: secretName: xsite-truststore-secret 12 filename: truststore.p12 13 locations: - name: site-b 14 clusterName: infinispan namespace: keycloak 15 url: openshift://api.site-b 16 secretName: xsite-token-secret 17",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan 1 namespace: keycloak annotations: infinispan.org/monitoring: 'true' 2 spec: replicas: 3 security: endpointSecretName: connect-secret 3 service: type: DataGrid sites: local: name: site-b 4 expose: type: Route 5 maxRelayNodes: 128 encryption: transportKeyStore: secretName: xsite-keystore-secret 6 alias: xsite 7 filename: keystore.p12 8 routerKeyStore: secretName: xsite-keystore-secret 9 alias: xsite 10 filename: keystore.p12 11 trustStore: secretName: xsite-truststore-secret 12 filename: truststore.p12 13 locations: - name: site-a 14 clusterName: infinispan namespace: keycloak 15 url: openshift://api.site-a 16 secretName: xsite-token-secret 17",
"apiVersion: infinispan.org/v2alpha1 kind: Cache metadata: name: sessions namespace: keycloak spec: clusterName: infinispan name: sessions template: |- distributedCache: mode: \"SYNC\" owners: \"2\" statistics: \"true\" remoteTimeout: 14000 stateTransfer: chunkSize: 16 backups: mergePolicy: ALWAYS_REMOVE 1 site-b: 2 backup: strategy: \"SYNC\" 3 timeout: 13000 stateTransfer: chunkSize: 16",
"apiVersion: infinispan.org/v2alpha1 kind: Cache metadata: name: sessions namespace: keycloak spec: clusterName: infinispan name: sessions template: |- distributedCache: mode: \"SYNC\" owners: \"2\" statistics: \"true\" remoteTimeout: 14000 stateTransfer: chunkSize: 16 backups: mergePolicy: ALWAYS_REMOVE 1 site-a: 2 backup: strategy: \"SYNC\" 3 timeout: 13000 stateTransfer: chunkSize: 16",
"wait --for condition=WellFormed --timeout=300s infinispans.infinispan.org -n keycloak infinispan",
"wait --for condition=CrossSiteViewFormed --timeout=300s infinispans.infinispan.org -n keycloak infinispan",
"apiVersion: v1 kind: Secret metadata: name: remote-store-secret namespace: keycloak type: Opaque data: username: ZGV2ZWxvcGVy # base64 encoding for 'developer' password: c2VjdXJlX3Bhc3N3b3Jk # base64 encoding for 'secure_password'",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: labels: app: keycloak name: keycloak namespace: keycloak spec: additionalOptions: - name: cache-remote-host 1 value: \"infinispan.keycloak.svc\" - name: cache-remote-port 2 value: \"11222\" - name: cache-remote-username 3 secret: name: remote-store-secret key: username - name: cache-remote-password 4 secret: name: remote-store-secret key: password - name: spi-connections-infinispan-quarkus-site-name 5 value: keycloak",
"HOSTNAME=USD(oc -n openshift-ingress get svc router-default -o jsonpath='{.status.loadBalancer.ingress[].hostname}' ) aws elbv2 describe-load-balancers --query \"LoadBalancers[?DNSName=='USD{HOSTNAME}'].{CanonicalHostedZoneId:CanonicalHostedZoneId,DNSName:DNSName}\" --region eu-west-1 \\ 1 --output json",
"[ { \"CanonicalHostedZoneId\": \"Z2IFOLAFXWLO4F\", \"DNSName\": \"ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com\" } ]",
"function createHealthCheck() { # Creating a hash of the caller reference to allow for names longer than 64 characters REF=(USD(echo USD1 | sha1sum )) aws route53 create-health-check --caller-reference \"USDREF\" --query \"HealthCheck.Id\" --no-cli-pager --output text --health-check-config ' { \"Type\": \"HTTPS\", \"ResourcePath\": \"/lb-check\", \"FullyQualifiedDomainName\": \"'USD1'\", \"Port\": 443, \"RequestInterval\": 30, \"FailureThreshold\": 1, \"EnableSNI\": true } ' } CLIENT_DOMAIN=\"client.keycloak-benchmark.com\" 1 PRIMARY_DOMAIN=\"primary.USD{CLIENT_DOMAIN}\" 2 BACKUP_DOMAIN=\"backup.USD{CLIENT_DOMAIN}\" 3 createHealthCheck USD{PRIMARY_DOMAIN} createHealthCheck USD{BACKUP_DOMAIN}",
"233e180f-f023-45a3-954e-415303f21eab 1 799e2cbb-43ae-4848-9b72-0d9173f04912 2",
"HOSTED_ZONE_ID=\"Z09084361B6LKQQRCVBEY\" 1 PRIMARY_LB_HOSTED_ZONE_ID=\"Z2IFOLAFXWLO4F\" PRIMARY_LB_DNS=ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com PRIMARY_HEALTH_ID=233e180f-f023-45a3-954e-415303f21eab BACKUP_LB_HOSTED_ZONE_ID=\"Z2IFOLAFXWLO4F\" BACKUP_LB_DNS=a184a0e02a5d44a9194e517c12c2b0ec-1203036292.elb.eu-west-1.amazonaws.com BACKUP_HEALTH_ID=799e2cbb-43ae-4848-9b72-0d9173f04912 aws route53 change-resource-record-sets --hosted-zone-id Z09084361B6LKQQRCVBEY --query \"ChangeInfo.Id\" --output text --change-batch ' { \"Comment\": \"Creating Record Set for 'USD{CLIENT_DOMAIN}'\", \"Changes\": [{ \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{PRIMARY_DOMAIN}'\", \"Type\": \"A\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{PRIMARY_LB_HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{PRIMARY_LB_DNS}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{BACKUP_DOMAIN}'\", \"Type\": \"A\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{BACKUP_LB_HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{BACKUP_LB_DNS}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{CLIENT_DOMAIN}'\", \"Type\": \"A\", \"SetIdentifier\": \"client-failover-primary-'USD{SUBDOMAIN}'\", \"Failover\": \"PRIMARY\", \"HealthCheckId\": \"'USD{PRIMARY_HEALTH_ID}'\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{PRIMARY_DOMAIN}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{CLIENT_DOMAIN}'\", \"Type\": \"A\", \"SetIdentifier\": \"client-failover-backup-'USD{SUBDOMAIN}'\", \"Failover\": \"SECONDARY\", \"HealthCheckId\": \"'USD{BACKUP_HEALTH_ID}'\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{BACKUP_DOMAIN}'\", \"EvaluateTargetHealth\": true } } }] } '",
"/change/C053410633T95FR9WN3YI",
"aws route53 wait resource-record-sets-changed --id /change/C053410633T95FR9WN3YI",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: keycloak spec: hostname: hostname: USD{CLIENT_DOMAIN} 1",
"cat <<EOF | oc apply -n USDNAMESPACE -f - 1 apiVersion: route.openshift.io/v1 kind: Route metadata: name: aws-health-route spec: host: USDDOMAIN 2 port: targetPort: https tls: insecureEdgeTerminationPolicy: Redirect termination: passthrough to: kind: Service name: keycloak-service weight: 100 wildcardPolicy: None EOF",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site take-offline --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"offline\" }",
"aws rds failover-db-cluster --db-cluster-identifier",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site take-offline --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"offline\" }",
"clearcache actionTokens clearcache authenticationSessions clearcache clientSessions clearcache loginFailures clearcache offlineClientSessions clearcache offlineSessions clearcache sessions clearcache work",
"site bring-online --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"online\" }",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site push-site-state --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"online\" }",
"site push-site-status --cache=actionTokens site push-site-status --cache=authenticationSessions site push-site-status --cache=clientSessions site push-site-status --cache=loginFailures site push-site-status --cache=offlineClientSessions site push-site-status --cache=offlineSessions site push-site-status --cache=sessions site push-site-status --cache=work",
"{ \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" }",
"site push-site-state --cache=<cache-name> --site=site-b",
"site clear-push-site-status --cache=actionTokens site clear-push-site-status --cache=authenticationSessions site clear-push-site-status --cache=clientSessions site clear-push-site-status --cache=loginFailures site clear-push-site-status --cache=offlineClientSessions site clear-push-site-status --cache=offlineSessions site clear-push-site-status --cache=sessions site clear-push-site-status --cache=work",
"\"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\"",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site take-offline --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"offline\" }",
"clearcache actionTokens clearcache authenticationSessions clearcache clientSessions clearcache loginFailures clearcache offlineClientSessions clearcache offlineSessions clearcache sessions clearcache work",
"site bring-online --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"online\" }",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site push-site-state --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"online\" }",
"site push-site-status --cache=actionTokens site push-site-status --cache=authenticationSessions site push-site-status --cache=clientSessions site push-site-status --cache=loginFailures site push-site-status --cache=offlineClientSessions site push-site-status --cache=offlineSessions site push-site-status --cache=sessions site push-site-status --cache=work",
"{ \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" }",
"site push-site-state --cache=<cache-name> --site=site-a",
"site clear-push-site-status --cache=actionTokens site clear-push-site-status --cache=authenticationSessions site clear-push-site-status --cache=clientSessions site clear-push-site-status --cache=loginFailures site clear-push-site-status --cache=offlineClientSessions site clear-push-site-status --cache=offlineSessions site clear-push-site-status --cache=sessions site clear-push-site-status --cache=work",
"\"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\"",
"aws rds failover-db-cluster --db-cluster-identifier",
"apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: take-offline namespace: keycloak 1 spec: cluster: infinispan 2 config: | 3 site take-offline --all-caches --site=site-a site status --all-caches --site=site-a",
"-n keycloak wait --for=jsonpath='{.status.phase}'=Succeeded Batch/take-offline"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/high_availability_guide/
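The last two entries in the command list above show the Batch resource variant of the cross-site CLI procedures. The self-contained sketch below mirrors that documented take-offline example; it assumes the infinispan cluster name and keycloak namespace used throughout the guide, with oc as the client.

# Apply a Batch resource that runs the same commands as the interactive
# Infinispan CLI session (take site-a offline, then print its status).
cat <<'EOF' | oc apply -f -
apiVersion: infinispan.org/v2alpha1
kind: Batch
metadata:
  name: take-offline
  namespace: keycloak
spec:
  cluster: infinispan
  config: |
    site take-offline --all-caches --site=site-a
    site status --all-caches --site=site-a
EOF

# Wait until the batch has finished executing.
oc -n keycloak wait --for=jsonpath='{.status.phase}'=Succeeded Batch/take-offline

The same mechanism drives the bring-online and push-site-state steps by changing the commands in the config block.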
|
Chapter 2. The Cargo build tool
|
Chapter 2. The Cargo build tool Cargo is a build tool and front end for the Rust compiler rustc as well as a package and dependency manager. It allows Rust projects to declare dependencies with specific version requirements, resolves the full dependency graph, downloads packages, and builds as well as tests your entire project. Rust Toolset is distributed with Cargo 1.75.0. 2.1. The Cargo directory structure and file placements The Cargo build tool uses set conventions for defining the directory structure and file placement within a Cargo package. Running the cargo new command generates the package directory structure and templates for both a manifest and a project file. By default, it also initializes a new Git repository in the package root directory. For a binary program, Cargo creates a directory project_name containing a text file named Cargo.toml and a subdirectory src containing a text file named main.rs . Additional resources For more information on the Cargo directory structure, see The Cargo Book - Package Layout . For in-depth information about Rust code organization, see The Rust Programming Language - Managing Growing Projects with Packages, Crates, and Modules . 2.2. Creating a Rust project Create a new Rust project that is set up according to the Cargo conventions. For more information on Cargo conventions, see Cargo directory structure and file placements . Procedure Create a Rust project by running the following command: On Red Hat Enterprise Linux 8: Replace < project_name > with your project name. On Red Hat Enterprise Linux 9: Replace < project_name > with your project name. Note To edit the project code, edit the main executable file main.rs and add new source files to the src subdirectory. Additional resources For information on configuring your project and adding dependencies, see Configuring Rust project dependencies . 2.3. Creating a Rust library project Complete the following steps to create a Rust library project using the Cargo build tool. Procedure To create a Rust library project, run the following command: On Red Hat Enterprise Linux 8: Replace < project_name > with the name of your Rust project. On Red Hat Enterprise Linux 9: Replace < project_name > with the name of your Rust project. Note To edit the project code, edit the source file, lib.rs , in the src subdirectory. Additional resources Managing Growing Projects with Packages, Crates, and Modules 2.4. Building a Rust project Build your Rust project using the Cargo build tool. Cargo resolves all dependencies of your project, downloads missing dependencies, and compiles it using the rustc compiler. By default, projects are built and compiled in debug mode. For information on compiling your project in release mode, see Building a Rust project in release mode . Prerequisites An existing Rust project. For information on how to create a Rust project, see Creating a Rust project . Procedure To build a Rust project managed by Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To verify that your Rust program can be built when you do not need to build an executable file, run: 2.5. Building a Rust project in release mode Build your Rust project in release mode using the Cargo build tool. Release mode is optimizing your source code and can therefore increase compilation time while ensuring that the compiled binary will run faster. Use this mode to produce optimized artifacts suitable for release and production. 
Cargo resolves all dependencies of your project, downloads missing dependencies, and compiles it using the rustc compiler. For information on compiling your project in debug mode, see Building a Rust project . Prerequisites An existing Rust project. For information on how to create a Rust project, see Creating a Rust project . Procedure To build the project in release mode, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To verify that your Rust program can be built when you do not need to build an executable file, run: 2.6. Running a Rust program Run your Rust project using the Cargo build tool. Cargo first rebuilds your project and then runs the resulting executable file. If used during development, the cargo run command correctly resolves the output path independently of the build mode. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure To run a Rust program managed as a project by Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Note If your program has not been built yet, Cargo builds your program before running it. 2.7. Testing a Rust project Test your Rust program using the Cargo build tool. Cargo first rebuilds your project and then runs the tests found in the project. Note that you can only test functions that are free, monomorphic, and take no arguments. The function return type must be either () or Result<(), E> where E: Error . By default, Rust projects are tested in debug mode. For information on testing your project in release mode, see Testing a Rust project in release mode . Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure Add the test attribute #[test] in front of your function. To run tests for a Rust project managed by Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Additional resources For more information on performing tests in your Rust project, see The Rust Reference - Testing attributes . 2.8. Testing a Rust project in release mode Test your Rust program in release mode using the Cargo build tool. Release mode is optimizing your source code and can therefore increase compilation time while ensuring that the compiled binary will run faster. Use this mode to produce optimized artifacts suitable for release and production. Cargo first rebuilds your project and then runs the tests found in the project. Note that you can only test functions that are free, monomorphic, and take no arguments. The function return type must be either () or Result<(), E> where E: Error . For information on testing your project in debug mode, see Testing a Rust project . Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure Add the test attribute #[test] in front of your function. To run tests for a Rust project managed by Cargo in release mode, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Additional resources For more information on performing tests in your Rust project, see The Rust Reference - Testing attributes . 2.9. Configuring Rust project dependencies Configure the dependencies of your Rust project using the Cargo build tool. To specify dependencies for a project managed by Cargo, edit the file Cargo.toml in the project directory and rebuild your project. 
Cargo downloads the Rust code packages and their dependencies, stores them locally, builds all of the project source code including the dependency code packages, and runs the resulting executable. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure In your project directory, open the file Cargo.toml . Move to the section labelled [dependencies] . Each dependency is listed on a new line in the following format: Rust code packages are called crates. Edit your dependencies. Rebuild your project by running: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Run your project by using the following command: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Additional resources For more information on configuring Rust dependencies, see The Cargo Book - Specifying Dependencies . 2.10. Building documentation for a Rust project Use the Cargo tool to generate documentation from comments in your source code that are marked for extraction. Note that documentation comments are extracted only for public functions, variables, and members. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Configured dependencies. For more information on configuring dependencies, see Configuring Rust project dependencies . Procedure To mark comments for extraction, use three slashes /// and place your comment in the beginning of the line it is documenting. Cargo supports the Markdown language for your comments. To build project documentation using Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: The generated documentation is located in the .target/doc directory. Additional resources For more information on building documentation using Cargo, see The Rust Programming Language - Making Useful Documentation Comments . 2.11. Compiling code into a WebAssembly binary with Rust on Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9 Beta Complete the following steps to install the WebAssembly standard library. Prerequisites Rust Toolset is installed. For more information, see Installing Rust Toolset . Procedure To install the WebAssembly standard library, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To use WebAssembly with Cargo, run: On Red Hat Enterprise Linux 8: Replace < command > with the Cargo command you want to run. On Red Hat Enterprise Linux 9: Replace < command > with the Cargo command you want to run. Additional resources For more information on WebAssembly, see the official Rust and WebAssembly documentation or the Rust and WebAssembly book. 2.12. Vendoring Rust project dependencies Create a local copy of the dependencies of your Rust project for offline redistribution and reuse using the Cargo build tool. This procedure is called vendoring project dependencies. The vendored dependencies including Rust code packages for building your project on a Windows operating system are located in the vendor directory. Vendored dependencies can be used by Cargo without any connection to the internet. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Configured dependencies. For more information on configuring dependencies, see Configuring Rust project dependencies . Procedure To vendor your Rust project with dependencies using Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: 2.13. 
Additional resources For more information on Cargo, see the Official Cargo Guide . To display the manual page included in Rust Toolset, run: For Red Hat Enterprise Linux 8: For Red Hat Enterprise Linux 9:
|
[
"cargo new --bin < project_name >",
"cargo new --bin < project_name >",
"cargo new --lib < project_name >",
"cargo new --lib < project_name >",
"cargo build",
"cargo build",
"cargo check",
"cargo build --release",
"cargo build --release",
"cargo check",
"cargo run",
"cargo run",
"cargo test",
"cargo test",
"cargo test --release",
"cargo test --release",
"crate_name = version",
"cargo build",
"cargo build",
"cargo run",
"cargo run",
"cargo doc --no-deps",
"cargo doc --no-deps",
"yum install rust-std-static-wasm32-unknown-unknown",
"dnf install rust-std-static-wasm32-unknown-unknown",
"cargo < command > --target wasm32-unknown-unknown",
"cargo < command > --target wasm32-unknown-unknown",
"cargo vendor",
"cargo vendor",
"man cargo",
"man cargo"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.75.0_toolset/assembly_the-cargo-build-tool
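The command list above shows each cargo invocation in isolation. The following sketch strings them together into one small workflow; the project name hello_cargo and the rand dependency are illustrative choices, not part of the chapter.

# Create a binary project and enter its directory.
cargo new --bin hello_cargo
cd hello_cargo

# Add a dependency; this writes a `crate_name = version` line into the
# [dependencies] section of Cargo.toml (cargo add ships with Cargo 1.62+).
cargo add rand

# Build and test in debug mode, then produce an optimized release build.
cargo build
cargo test
cargo build --release

# Generate documentation from /// comments (output lands in target/doc)
# and vendor all dependencies for offline builds.
cargo doc --no-deps
cargo vendor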
|
Chapter 5. Getting started with OptaPlanner in Business Central: An employee rostering example
|
Chapter 5. Getting started with OptaPlanner in Business Central: An employee rostering example You can build and deploy the employee-rostering sample project in Business Central. The project demonstrates how to create each of the Business Central assets required to solve the shift rostering planning problem and use Red Hat build of OptaPlanner to find the best possible solution. You can deploy the preconfigured employee-rostering project in Business Central. Alternatively, you can create the project yourself using Business Central. Note The employee-rostering sample project in Business Central does not include a data set. You must supply a data set in XML format using a REST API call. 5.1. Deploying the employee rostering sample project in Business Central Business Central includes a number of sample projects that you can use to get familiar with the product and its features. The employee rostering sample project is designed and created to demonstrate the shift rostering use case for Red Hat build of OptaPlanner. Use the following procedure to deploy and run the employee rostering sample in Business Central. Prerequisites Red Hat Process Automation Manager has been downloaded and installed. For installation options, see Planning a Red Hat Process Automation Manager installation . You have started Red Hat Process Automation Manager, as described in the installation documentation, and you are logged in to Business Central as a user with admin permissions. Procedure In Business Central, click Menu Design Projects . In the preconfigured MySpace space, click Try Samples . Select employee-rostering from the list of sample projects and click Ok in the upper-right corner to import the project. After the asset list has compiled, click Build & Deploy to deploy the employee rostering example. The rest of this document explains each of the project assets and their configuration. 5.2. Re-creating the employee rostering sample project The employee rostering sample project is a preconfigured project available in Business Central. You can learn about how to deploy this project in Section 5.1, "Deploying the employee rostering sample project in Business Central" . You can create the employee rostering example "from scratch". You can use the workflow in this example to create a similar project of your own in Business Central. 5.2.1. Setting up the employee rostering project To start developing a solver in Business Central, you must set up the project. Prerequisites Red Hat Process Automation Manager has been downloaded and installed. You have deployed Business Central and logged in with a user that has the admin role. Procedure Create a new project in Business Central by clicking Menu Design Projects Add Project . In the Add Project window, fill out the following fields: Name : employee-rostering Description (optional): Employee rostering problem optimization using OptaPlanner. Assigns employees to shifts based on their skill. Optional: Click Configure Advanced Options to populate the Group ID , Artifact ID , and Version information. Group ID : employeerostering Artifact ID : employeerostering Version : 1.0.0-SNAPSHOT Click Add to add the project to the Business Central project repository. 5.2.2. Problem facts and planning entities Each of the domain classes in the employee rostering planning problem is categorized as one of the following: An unrelated class: not used by any of the score constraints. From a planning standpoint, this data is obsolete. 
A problem fact class: used by the score constraints, but does not change during planning (as long as the problem stays the same), for example, Shift and Employee . All the properties of a problem fact class are problem properties. A planning entity class: used by the score constraints and changes during planning, for example, ShiftAssignment . The properties that change during planning are planning variables . The other properties are problem properties. Ask yourself the following questions: What class changes during planning? Which class has variables that I want the Solver to change? That class is a planning entity. A planning entity class needs to be annotated with the @PlanningEntity annotation, or defined in Business Central using the Red Hat build of OptaPlanner dock in the domain designer. Each planning entity class has one or more planning variables , and must also have one or more defining properties. Most use cases have only one planning entity class, and only one planning variable per planning entity class. 5.2.3. Creating the data model for the employee rostering project Use this section to create the data objects required to run the employee rostering sample project in Business Central. Prerequisites You have completed the project setup described in Section 5.2.1, "Setting up the employee rostering project" . Procedure With your new project, either click Data Object in the project perspective, or click Add Asset Data Object to create a new data object. Name the first data object Timeslot , and select employeerostering.employeerostering as the Package . Click Ok . In the Data Objects perspective, click +add field to add fields to the Timeslot data object. In the id field, type endTime . Click the drop-down menu to Type and select LocalDateTime . Click Create and continue to add another field. Add another field with the id startTime and Type LocalDateTime . Click Create . Click Save in the upper-right corner to save the Timeslot data object. Click the x in the upper-right corner to close the Data Objects perspective and return to the Assets menu. Using the steps, create the following data objects and their attributes: Table 5.1. Skill id Type name String Table 5.2. Employee id Type name String skills employeerostering.employeerostering.Skill[List] Table 5.3. Shift id Type requiredSkill employeerostering.employeerostering.Skill timeslot employeerostering.employeerostering.Timeslot Table 5.4. DayOffRequest id Type date LocalDate employee employeerostering.employeerostering.Employee Table 5.5. ShiftAssignment id Type employee employeerostering.employeerostering.Employee shift employeerostering.employeerostering.Shift For more examples of creating data objects, see Getting started with decision services . 5.2.3.1. Creating the employee roster planning entity In order to solve the employee rostering planning problem, you must create a planning entity and a solver. The planning entity is defined in the domain designer using the attributes available in the Red Hat build of OptaPlanner dock. Use the following procedure to define the ShiftAssignment data object as the planning entity for the employee rostering example. Prerequisites You have created the relevant data objects and planning entity required to run the employee rostering example by completing the procedures in Section 5.2.3, "Creating the data model for the employee rostering project" . Procedure From the project Assets menu, open the ShiftAssignment data object. 
In the Data Objects perspective, open the OptaPlanner dock by clicking the on the right. Select Planning Entity . Select employee from the list of fields under the ShiftAssignment data object. In the OptaPlanner dock, select Planning Variable . In the Value Range Id input field, type employeeRange . This adds the @ValueRangeProvider annotation to the planning entity, which you can view by clicking the Source tab in the designer. The value range of a planning variable is defined with the @ValueRangeProvider annotation. A @ValueRangeProvider annotation always has a property id , which is referenced by the @PlanningVariable property valueRangeProviderRefs . Close the dock and click Save to save the data object. 5.2.3.2. Creating the employee roster planning solution The employee roster problem relies on a defined planning solution. The planning solution is defined in the domain designer using the attributes available in the Red Hat build of OptaPlanner dock. Prerequisites You have created the relevant data objects and planning entity required to run the employee rostering example by completing the procedures in Section 5.2.3, "Creating the data model for the employee rostering project" and Section 5.2.3.1, "Creating the employee roster planning entity" . Procedure Create a new data object with the identifier EmployeeRoster . Create the following fields: Table 5.6. EmployeeRoster id Type dayOffRequestList employeerostering.employeerostering.DayOffRequest[List] shiftAssignmentList employeerostering.employeerostering.ShiftAssignment[List] shiftList employeerostering.employeerostering.Shift[List] skillList employeerostering.employeerostering.Skill[List] timeslotList employeerostering.employeerostering.Timeslot[List] In the Data Objects perspective, open the OptaPlanner dock by clicking the on the right. Select Planning Solution . Leave the default Hard soft score as the Solution Score Type . This automatically generates a score field in the EmployeeRoster data object with the solution score as the type. Add a new field with the following attributes: id Type employeeList employeerostering.employeerostering.Employee[List] With the employeeList field selected, open the OptaPlanner dock and select the Planning Value Range Provider box. In the id field, type employeeRange . Close the dock. Click Save in the upper-right corner to save the asset. 5.2.4. Employee rostering constraints Employee rostering is a planning problem. All planning problems include constraints that must be satisfied in order to find an optimal solution. The employee rostering sample project in Business Central includes the following hard and soft constraints: Hard constraint Employees are only assigned one shift per day. All shifts that require a particular employee skill are assigned an employee with that particular skill. Soft constraints All employees are assigned a shift. If an employee requests a day off, their shift is reassigned to another employee. Hard and soft constraints are defined in Business Central using either the free-form DRL designer, or using guided rules. 5.2.4.1. DRL (Drools Rule Language) rules DRL (Drools Rule Language) rules are business rules that you define directly in .drl text files. These DRL files are the source in which all other rule assets in Business Central are ultimately rendered. 
You can create and manage DRL files within the Business Central interface, or create them externally as part of a Maven or Java project using Red Hat CodeReady Studio or another integrated development environment (IDE). A DRL file can contain one or more rules that define at a minimum the rule conditions ( when ) and actions ( then ). The DRL designer in Business Central provides syntax highlighting for Java, DRL, and XML. DRL files consist of the following components: Components in a DRL file The following example DRL rule determines the age limit in a loan application decision service: Example rule for loan application age limit A DRL file can contain single or multiple rules, queries, and functions, and can define resource declarations such as imports, globals, and attributes that are assigned and used by your rules and queries. The DRL package must be listed at the top of a DRL file and the rules are typically listed last. All other DRL components can follow any order. Each rule must have a unique name within the rule package. If you use the same rule name more than once in any DRL file in the package, the rules fail to compile. Always enclose rule names with double quotation marks ( rule "rule name" ) to prevent possible compilation errors, especially if you use spaces in rule names. All data objects related to a DRL rule must be in the same project package as the DRL file in Business Central. Assets in the same package are imported by default. Existing assets in other packages can be imported with the DRL rule. 5.2.4.2. Defining constraints for employee rostering using the DRL designer You can create constraint definitions for the employee rostering example using the free-form DRL designer in Business Central. Use this procedure to create a hard constraint where no employee is assigned a shift that begins less than 10 hours after their shift ended. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset DRL file . In the DRL file name field, type ComplexScoreRules . Select the employeerostering.employeerostering package. Click +Ok to create the DRL file. In the Model tab of the DRL designer, define the Employee10HourShiftSpace rule as a DRL file: Click Save to save the DRL file. For more information about creating DRL files, see Designing a decision service using DRL rules . 5.2.5. Creating rules for employee rostering using guided rules You can create rules that define hard and soft constraints for employee rostering using the guided rules designer in Business Central. 5.2.5.1. Guided rules Guided rules are business rules that you create in a UI-based guided rules designer in Business Central that leads you through the rule-creation process. The guided rules designer provides fields and options for acceptable input based on the data objects for the rule being defined. The guided rules that you define are compiled into Drools Rule Language (DRL) rules as with all other rule assets. All data objects related to a guided rule must be in the same project package as the guided rule. Assets in the same package are imported by default. After you create the necessary data objects and the guided rule, you can use the Data Objects tab of the guided rules designer to verify that all required data objects are listed or to import other existing data objects by adding a New item . 5.2.5.2. 
Creating a guided rule to balance employee shift numbers The BalanceEmployeesShiftNumber guided rule creates a soft constraint that ensures shifts are assigned to employees in a way that is balanced as evenly as possible. It does this by creating a score penalty that increases when shift distribution is less even. The score formula, implemented by the rule, incentivizes the Solver to distribute shifts in a more balanced way. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter BalanceEmployeesShiftNumber as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select Employee in the Add a condition to the rule window. Click +Ok . Click the Employee condition to modify the constraints and add the variable name USDemployee . Add the WHEN condition From Accumulate . Above the From Accumulate condition, click click to add pattern and select Number as the fact type from the drop-down list. Add the variable name USDshiftCount to the Number condition. Below the From Accumulate condition, click click to add pattern and select the ShiftAssignment fact type from the drop-down list. Add the variable name USDshiftAssignment to the ShiftAssignment fact type. Click the ShiftAssignment condition again and from the Add a restriction on a field drop-down list, select employee . Select equal to from the drop-down list to the employee constraint. Click the icon to the drop-down button to add a variable, and click Bound variable in the Field value window. Select USDemployee from the drop-down list. In the Function box type count(USDshiftAssignment) . Add the THEN condition by clicking the in the THEN field. Select Modify Soft Score in the Add a new action window. Click +Ok . Type the following expression into the box: -(USDshiftCount.intValue()*USDshiftCount.intValue()) Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 5.2.5.3. Creating a guided rule for no more than one shift per day The OneEmployeeShiftPerDay guided rule creates a hard constraint that employees are not assigned more than one shift per day. In the employee rostering example, this constraint is created using the guided rule designer. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter OneEmployeeShiftPerDay as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select Free form DRL from the Add a condition to the rule window. In the free form DRL box, type the following condition: USDshiftAssignment : ShiftAssignment( employee != null ) ShiftAssignment( this != USDshiftAssignment , employee == USDshiftAssignment.employee , shift.timeslot.startTime.toLocalDate() == USDshiftAssignment.shift.timeslot.startTime.toLocalDate() ) This condition states that a shift cannot be assigned to an employee that already has another shift assignment on the same day. Add the THEN condition by clicking the in the THEN field. 
Select Add free form DRL from the Add a new action window. In the free form DRL box, type the following condition: scoreHolder.addHardConstraintMatch(kcontext, -1); Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 5.2.5.4. Creating a guided rule to match skills to shift requirements The ShiftReqiredSkillsAreMet guided rule creates a hard constraint that ensures all shifts are assigned an employee with the correct set of skills. In the employee rostering example, this constraint is created using the guided rule designer. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter ShiftReqiredSkillsAreMet as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select ShiftAssignment in the Add a condition to the rule window. Click +Ok . Click the ShiftAssignment condition, and select employee from the Add a restriction on a field drop-down list. In the designer, click the drop-down list to employee and select is not null . Click the ShiftAssignment condition, and click Expression editor . In the designer, click [not bound] to open the Expression editor , and bind the expression to the variable USDrequiredSkill . Click Set . In the designer, to USDrequiredSkill , select shift from the first drop-down list, then requiredSkill from the drop-down list. Click the ShiftAssignment condition, and click Expression editor . In the designer, to [not bound] , select employee from the first drop-down list, then skills from the drop-down list. Leave the drop-down list as Choose . In the drop-down box, change please choose to excludes . Click the icon to excludes , and in the Field value window, click the New formula button. Type USDrequiredSkill into the formula box. Add the THEN condition by clicking the in the THEN field. Select Modify Hard Score in the Add a new action window. Click +Ok . Type -1 into the score actions box. Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 5.2.5.5. Creating a guided rule to manage day off requests The DayOffRequest guided rule creates a soft constraint. This constraint allows a shift to be reassigned to another employee in the event the employee who was originally assigned the shift is no longer able to work that day. In the employee rostering example, this constraint is created using the guided rule designer. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter DayOffRequest as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select Free form DRL from the Add a condition to the rule window. 
In the free form DRL box, type the following condition: USDdayOffRequest : DayOffRequest( ) ShiftAssignment( employee == USDdayOffRequest.employee , shift.timeslot.startTime.toLocalDate() == USDdayOffRequest.date ) This condition states if a shift is assigned to an employee who has made a day off request, the employee can be unassigned the shift on that day. Add the THEN condition by clicking the in the THEN field. Select Add free form DRL from the Add a new action window. In the free form DRL box, type the following condition: scoreHolder.addSoftConstraintMatch(kcontext, -100); Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 5.2.6. Creating a solver configuration for employee rostering You can create and edit Solver configurations in Business Central. The Solver configuration designer creates a solver configuration that can be run after the project is deployed. Prerequisites Red Hat Process Automation Manager has been downloaded and installed. You have created and configured all of the relevant assets for the employee rostering example. Procedure In Business Central, click Menu Projects , and click your project to open it. In the Assets perspective, click Add Asset Solver configuration In the Create new Solver configuration window, type the name EmployeeRosteringSolverConfig for your Solver and click Ok . This opens the Solver configuration designer. In the Score Director Factory configuration section, define a KIE base that contains scoring rule definitions. The employee rostering sample project uses defaultKieBase . Select one of the KIE sessions defined within the KIE base. The employee rostering sample project uses defaultKieSession . Click Validate in the upper-right corner to check the Score Director Factory configuration is correct. If validation fails, address any problems described in the error message, and try again to validate until the configuration passes. Click Save to save the Solver configuration. 5.2.7. Configuring Solver termination for the employee rostering project You can configure the Solver to terminate after a specified amount of time. By default, the planning engine is given an unlimited time period to solve a problem instance. The employee rostering sample project is set up to run for 30 seconds. Prerequisites You have created all relevant assets for the employee rostering project and created the EmployeeRosteringSolverConfig solver configuration in Business Central as described in Section 5.2.6, "Creating a solver configuration for employee rostering" . Procedure Open the EmployeeRosteringSolverConfig from the Assets perspective. This will open the Solver configuration designer. In the Termination section, click Add to create new termination element within the selected logical group. Select the Time spent termination type from the drop-down list. This is added as an input field in the termination configuration. Use the arrows to the time elements to adjust the amount of time spent to 30 seconds. Click Validate in the upper-right corner to check the Score Director Factory configuration is correct. If validation fails, address any problems described in the error message, and try again to validate until the configuration passes. 
Click Save to save the Solver configuration. 5.3. Accessing the solver using the REST API After deploying or re-creating the sample solver, you can access it using the REST API. You must register a solver instance using the REST API. Then you can supply data sets and retrieve optimized solutions. Prerequisites The employee rostering project is set up and deployed according to the sections in this document. You can either deploy the sample project, as described in Section 5.1, "Deploying the employee rostering sample project in Business Central" , or re-create the project, as described in Section 5.2, "Re-creating the employee rostering sample project" . 5.3.1. Registering the Solver using the REST API You must register the solver instance using the REST API before you can use the solver. Each solver instance is capable of optimizing one planning problem at a time. Procedure Create a HTTP request using the following header: Register the Solver using the following request: PUT http://localhost:8080/kie-server/services/rest/server/containers/employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver Request body <solver-instance> <solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file> </solver-instance> 5.3.2. Calling the Solver using the REST API After registering the solver instance, you can use the REST API to submit a data set to the solver and to retrieve an optimized solution. Procedure Create a HTTP request using the following header: Submit a request to the Solver with a data set, as in the following example: POST http://localhost:8080/kie-server/services/rest/server/containers/employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver/state/solving Request body <employeerostering.employeerostering.EmployeeRoster> <employeeList> <employeerostering.employeerostering.Employee> <name>John</name> <skills> <employeerostering.employeerostering.Skill> <name>reading</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Mary</name> <skills> <employeerostering.employeerostering.Skill> <name>writing</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Petr</name> <skills> <employeerostering.employeerostering.Skill> <name>speaking</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> </employeeList> <shiftList> <employeerostering.employeerostering.Shift> <timeslot> <startTime>2017-01-01T00:00:00</startTime> <endTime>2017-01-01T01:00:00</endTime> </timeslot> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> 
</employeerostering.employeerostering.Shift> </shiftList> <skillList> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> </skillList> <timeslotList> <employeerostering.employeerostering.Timeslot reference="../../shiftList/employeerostering.employeerostering.Shift/timeslot"/> </timeslotList> <dayOffRequestList/> <shiftAssignmentList> <employeerostering.employeerostering.ShiftAssignment> <shift reference="../../../shiftList/employeerostering.employeerostering.Shift"/> </employeerostering.employeerostering.ShiftAssignment> <employeerostering.employeerostering.ShiftAssignment> <shift reference="../../../shiftList/employeerostering.employeerostering.Shift[3]"/> </employeerostering.employeerostering.ShiftAssignment> <employeerostering.employeerostering.ShiftAssignment> <shift reference="../../../shiftList/employeerostering.employeerostering.Shift[2]"/> </employeerostering.employeerostering.ShiftAssignment> </shiftAssignmentList> </employeerostering.employeerostering.EmployeeRoster> Request the best solution to the planning problem: GET http://localhost:8080/kie-server/services/rest/server/containers/employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver/bestsolution Example response <solver-instance> <container-id>employee-rostering</container-id> <solver-id>solver1</solver-id> <solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file> <status>NOT_SOLVING</status> <score scoreClass="org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore">0hard/0soft</score> <best-solution class="employeerostering.employeerostering.EmployeeRoster"> <employeeList> <employeerostering.employeerostering.Employee> <name>John</name> <skills> <employeerostering.employeerostering.Skill> <name>reading</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Mary</name> <skills> <employeerostering.employeerostering.Skill> <name>writing</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Petr</name> <skills> <employeerostering.employeerostering.Skill> <name>speaking</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> </employeeList> <shiftList> <employeerostering.employeerostering.Shift> <timeslot> <startTime>2017-01-01T00:00:00</startTime> <endTime>2017-01-01T01:00:00</endTime> </timeslot> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> 
<employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> </shiftList> <skillList> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> </skillList> <timeslotList> <employeerostering.employeerostering.Timeslot reference="../../shiftList/employeerostering.employeerostering.Shift/timeslot"/> </timeslotList> <dayOffRequestList/> <shiftAssignmentList/> <score>0hard/0soft</score> </best-solution> </solver-instance>
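Each of the preceding requests must carry the X-KIE-ContentType: xstream and content-type: application/xml headers and authenticate as the admin:admin user. The following minimal Java 11+ sketch, which is not part of the sample project, issues the same three calls with java.net.http.HttpClient. It assumes a KIE Server running locally at http://localhost:8080, standard HTTP Basic authentication for the admin:admin credentials, and the container and solver IDs used in this section, and it abbreviates the roster payload to a placeholder that you would replace with the full request body shown above. Example EmployeeRosteringRestClient.java file
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EmployeeRosteringRestClient {

    // Assumed local KIE Server base URL plus the container and solver IDs registered in this section.
    private static final String SOLVER_URL = "http://localhost:8080/kie-server/services/rest/server"
            + "/containers/employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver";
    private static final String BASIC_AUTH = "Basic "
            + Base64.getEncoder().encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // 1. Register the solver instance (PUT with the solver configuration file).
        String solverInstance = "<solver-instance>"
                + "<solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file>"
                + "</solver-instance>";
        System.out.println(send(HttpRequest.newBuilder(URI.create(SOLVER_URL))
                .PUT(HttpRequest.BodyPublishers.ofString(solverInstance))));

        // 2. Submit the planning problem. The placeholder below stands in for the full
        //    EmployeeRoster XML request body shown earlier in this section.
        String rosterXml = "<employeerostering.employeerostering.EmployeeRoster>"
                + "<!-- employeeList, shiftList, skillList, timeslotList, dayOffRequestList, shiftAssignmentList -->"
                + "</employeerostering.employeerostering.EmployeeRoster>";
        System.out.println(send(HttpRequest.newBuilder(URI.create(SOLVER_URL + "/state/solving"))
                .POST(HttpRequest.BodyPublishers.ofString(rosterXml))));

        // 3. Wait for the configured 30-second termination, then fetch the best solution (GET).
        Thread.sleep(31_000);
        System.out.println(send(HttpRequest.newBuilder(URI.create(SOLVER_URL + "/bestsolution")).GET()));
    }

    private static String send(HttpRequest.Builder builder) throws Exception {
        HttpRequest request = builder
                .header("Authorization", BASIC_AUTH)
                .header("X-KIE-ContentType", "xstream")
                .header("Content-Type", "application/xml")
                .build();
        HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode() + " " + response.body();
    }
}
Because the POST to /state/solving returns immediately, the sketch simply waits slightly longer than the 30-second termination configured for the Solver before requesting the best solution; a production client would instead poll the solver status until it reports NOT_SOLVING.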
Chapter 33. File Systems
Chapter 33. File Systems kernel component, BZ#1172496 Due to a bug in the ext4 code, it is currently impossible to resize ext4 file systems that have a 1 kilobyte block size and are smaller than 32 megabytes.
Chapter 1. Configuring your Red Hat build of Quarkus applications by using a properties file
Chapter 1. Configuring your Red Hat build of Quarkus applications by using a properties file As an application developer, you can use Red Hat build of Quarkus to create microservices-based applications written in Java that run on OpenShift and serverless environments. Applications compiled to native executables have small memory footprints and fast startup times. You can configure your Quarkus application by using either of the following methods: Setting properties in the application.properties file Applying structured configuration in YAML format by updating the application.yaml file You can also extend and customize the configuration for your application by doing the following: Substituting and composing configuration property values by using property expressions Implementing MicroProfile-compliant classes with custom configuration source converters that read configuration values from different external sources Using configuration profiles to keep separate sets of configuration values for your development, test, and production environments The procedures include configuration examples that are created by using the Quarkus config-quickstart exercise. Prerequisites You have installed OpenJDK 17 or 21 and set the JAVA_HOME environment variable to specify the location of the Java SDK. To download the Red Hat build of OpenJDK, log in to the Red Hat Customer Portal and go to Software Downloads . You have installed Apache Maven 3.8.6 or later. Download Maven from the Apache Maven Project website. You have configured Maven to use artifacts from the Quarkus Maven repository . To learn how to configure Maven settings, see Getting started with Quarkus . 1.1. Configuration options You can manage your application's settings in a single configuration file. Additionally, you can define configuration profiles to group related settings for different environments, such as development, testing, or production. This way, you can easily switch between profiles and apply environment-specific properties without altering your main configuration file. By default, Quarkus reads properties from the application.properties file located in the src/main/resources directory. If, instead, you prefer to configure and manage application properties in an application.yaml file, add the quarkus-config-yaml dependency to your project's pom.xml file. For more information, see Adding YAML configuration support . Red Hat build of Quarkus also supports MicroProfile Config, which you can use to load your application's configuration from various sources. By using the MicroProfile Config specification from the Eclipse MicroProfile project, you can inject configuration properties into your application and access them by using methods defined in your code. Quarkus can read application properties from different origins, including: The file system A database A Kubernetes or OpenShift Container Platform ConfigMap or Secret object Any source that a Java application can load 1.2. Creating the configuration quickstart project With the config-quickstart project, you can get up and running with a simple Quarkus application by using Apache Maven and the Quarkus Maven plugin. The following procedure describes how you can create a Quarkus Maven project. Prerequisites You have installed OpenJDK 17 or 21 and set the JAVA_HOME environment variable to specify the location of the Java SDK. To download Red Hat build of OpenJDK, log in to the Red Hat Customer Portal and go to Software Downloads . You have installed Apache Maven 3.8.6 or later. 
Download Maven from the Apache Maven Project website. Procedure Verify that Maven uses OpenJDK 17 or 21 and that the Maven version is 3.8.6 or later: mvn --version If the mvn command does not return OpenJDK 17 or 21, ensure that the directory where OpenJDK 17 or 21 is installed on your system is included in the PATH environment variable: export PATH=USDPATH:<path_to_JDK> Enter the following command to generate the project: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.3.SP1-redhat-00002:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=config-quickstart \ -DplatformGroupId=com.redhat.quarkus.platform \ -DplatformVersion=3.15.3.SP1-redhat-00002 \ -DclassName="org.acme.config.GreetingResource" \ -Dpath="/greeting" cd config-quickstart Verification The preceding mvn command creates the following items in the config-quickstart directory: The Maven project directory structure An org.acme.config.GreetingResource resource A landing page that you can access at http://localhost:8080 after you start the application Associated unit tests for testing your application in native mode and JVM mode Example Dockerfile.jvm and Dockerfile.native files in the src/main/docker subdirectory The application configuration file Note Alternatively, you can download a Quarkus Maven project to use in this tutorial from the Quarkus Quickstarts archive or clone the Quarkus Quickstarts Git repository. The Quarkus config-quickstart exercise is located in the config-quickstart directory. 1.3. Injecting configuration values into your Red Hat build of Quarkus application Red Hat build of Quarkus uses the Configuration for MicroProfile feature to inject configuration data into the application. You can access the configuration by using context and dependency injection (CDI) or by defining a method in your code. Use the @ConfigProperty annotation to map an object property to a key in the MicroProfile Config Sources file of your application. The following procedure and examples show how you can inject an individual property configuration into a Quarkus config-quickstart project by using the Red Hat build of Quarkus Application configuration file, application.properties . Note You can use the MicroProfile Config properties file ( src/main/resources/META-INF/microprofile-config.properties ) just like the application.properties file. However, using application.properties is the preferred method. Prerequisites You have created the Quarkus config-quickstart project. Note For a completed example of that project, download the Quarkus Quickstarts archive or clone the Quarkus Quickstarts Git repository and go to the config-quickstart directory. Procedure Open the src/main/resources/application.properties file. Add configuration properties to your configuration file where <property_name> is the property name and <value> is the value of the property: <property_name>=<value> The following example shows how to set the values for the greeting.message and the greeting.name properties in the Quarkus config-quickstart project: Example application.properties file greeting.message=hello greeting.name=quarkus Important When you are configuring your applications, do not prefix application-specific properties with the string quarkus . The quarkus prefix is reserved for configuring Quarkus at the framework level. Using quarkus as a prefix for application-specific properties might lead to unexpected results when your application runs. Review the GreetingResource.java Java file in your project. 
The file contains the GreetingResource class with the hello() method that returns a message when you send an HTTP request on the /greeting endpoint: Example GreetingResource.java file package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/greeting") public class GreetingResource { String message; Optional<String> name; String suffix; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + suffix; } } In the example provided, the values of the message and name strings in the hello() method are not initialized. The application throws a NullPointerException when the endpoint is called and starts successfully in this state. Define the message , name , and suffix fields, and annotate them with @ConfigProperty , matching the values that you defined for the greeting.message and greeting.name properties. Use the @ConfigProperty annotation to inject the configuration value for each string. For example: Example GreetingResource.java file package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/greeting") public class GreetingResource { @ConfigProperty(name = "greeting.message") 1 String message; @ConfigProperty(name = "greeting.suffix", defaultValue="!") 2 String suffix; @ConfigProperty(name = "greeting.name") Optional<String> name; 3 @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + suffix; } } 1 If you do not configure a value for the greeting.message string, the application fails and throws the following exception: jakarta.enterprise.inject.spi.DeploymentException: io.quarkus.runtime.configuration.ConfigurationException: Failed to load config value of type class java.lang.String for: greeting.message 2 If you do not configure a value for the greeting.suffix , Quarkus resolves it to the default value. 3 If you do not define the greeting.name property, the value of name is not available. Your application still runs even when this value is not available because you set the Optional parameter on name . Note To inject a configured value, you can use @ConfigProperty . You do not need to include the @Inject annotation for members that you annotate with @ConfigProperty . Compile and start your application in development mode: ./mvnw quarkus:dev Enter the following command in a new terminal window to verify that the endpoint returns the message: curl http://localhost:8080/greeting This command returns the following output: hello quarkus! To stop the application, press Ctrl+C. 1.4. Updating the functional test to validate configuration changes Before you test the functionality of your application, you must update the functional test to reflect the changes that you made to the endpoint of your application. The following procedure shows how you can update your testHelloEndpoint method on the Quarkus config-quickstart project. Procedure Open the GreetingResourceTest.java file. 
Update the content of the testHelloEndpoint method: package org.acme.config; import io.quarkus.test.junit.QuarkusTest; import org.junit.jupiter.api.Test; import static io.restassured.RestAssured.given; import static org.hamcrest.CoreMatchers.is; @QuarkusTest public class GreetingResourceTest { @Test public void testHelloEndpoint() { given() .when().get("/greeting") .then() .statusCode(200) .body(is("hello quarkus!")); // Modified line } } Compile and start your application in development mode: ./mvnw quarkus:dev To start running the tests, press r on your keyboard. 1.5. Setting configuration properties By default, Quarkus reads properties from the application.properties file that is in the src/main/resources directory. If you change build properties, ensure that you repackage your application. Quarkus configures most properties during build time. Extensions can define properties as overridable at runtime, for example, the database URL, a user name, and a password, which can be specific to your target environment. Prerequisites You have created the Quarkus config-quickstart project. You have defined the greeting.message and greeting.name properties in the application.properties file of your project. Procedure To package your Quarkus project, enter the following command: ./mvnw clean package Use one of the following methods to set the configuration properties: Setting system properties: Enter the following command, where <property_name> is the name of the configuration property that you want to add and <value> is the value of the property: java -D<property_name>=<value> -jar target/quarkus-app/quarkus-run.jar For example, to set the value of the greeting.suffix property to ? , enter the following command: java -Dgreeting.suffix=? -jar target/quarkus-app/quarkus-run.jar Setting environment variables: Enter the following command, where <property_name> is the name of the configuration property that you want to set and <value> is the value of the property: export <property_name>=<value> ; java -jar target/quarkus-app/quarkus-run.jar Note Environment variable names follow the conversion rules of Eclipse MicroProfile . Convert the name to upper case and replace any character that is not alphanumeric with an underscore ( _ ). Using an environment file: Create a .env file in your current working directory and add configuration properties, where <PROPERTY_NAME> is the property name and <value> is the value of the property: <PROPERTY_NAME>=<value> Note In development mode, this file is in the root directory of your project. Do not track the file in version control. If you create a .env file in the root directory of your project, you can define keys and values that the program reads as properties. Using the application.properties file: Place the configuration file in the USDPWD/config/application.properties directory where the application runs so that any runtime properties that are defined in that file override the default configuration. Note You can also use the config/application.properties features in development mode. Place the config/application.properties file inside the target directory. Any cleaning operation from the build tool, for example, mvn clean , also removes the config directory. 1.6. Advanced configuration mapping The following advanced mapping procedures are extensions that are specific to Red Hat build of Quarkus and are outside of the MicroProfile Config specification. 1.6.1. 
Annotating an interface with @ConfigMapping Instead of individually injecting multiple related configuration values, use the @io.smallrye.config.ConfigMapping annotation to group configuration properties. The following procedure describes how you can use the @ConfigMapping annotation on the Quarkus config-quickstart project. Prerequisites You have created the Quarkus config-quickstart project. You have defined the greeting.message and greeting.name properties in the application.properties file of your project. Procedure Review the GreetingResource.java file in your project and ensure that it contains the contents that are shown in the following example. To use the @ConfigProperty annotation to inject configuration properties from another configuration source into this class, you must import the java.util.Optional and org.eclipse.microprofile.config.inject.ConfigProperty packages. Example GreetingResource.java file package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/greeting") public class GreetingResource { @ConfigProperty(name = "greeting.message") String message; @ConfigProperty(name = "greeting.suffix", defaultValue="!") String suffix; @ConfigProperty(name = "greeting.name") Optional<String> name; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + suffix; } } Create a GreetingConfiguration.java file in the src/main/java/org/acme/config directory. Add the import statements for ConfigMapping and Optional to the file: Example GreetingConfiguration.java file package org.acme.config; import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.Optional; @ConfigMapping(prefix = "greeting") 1 public interface GreetingConfiguration { String message(); @WithDefault("!") 2 String suffix(); Optional<String> name(); } 1 The prefix property is optional. For example, in this scenario, the prefix is greeting . 2 If greeting.suffix is not set, ! is used as the default value. Inject the GreetingConfiguration instance into the GreetingResource class by using the @Inject annotation, as follows: Note This snippet replaces the three fields that are annotated with @ConfigProperty that are in the initial version of the config-quickstart project. Example GreetingResource.java file package org.acme.config; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path("/greeting") public class GreetingResource { @Inject GreetingConfiguration config; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return config.message() + " " + config.name().orElse("world") + config.suffix(); } } Compile and start your application in development mode: ./mvnw quarkus:dev Important If you do not give values for the class properties, the application fails to compile, and an io.smallrye.config.ConfigValidationException error is returned to indicate that a value is missing. This does not apply to optional fields or fields with a default value. To verify that the endpoint returns the message, enter the following command in a new terminal window: curl http://localhost:8080/greeting You receive the following message: hello quarkus! To stop the application, press Ctrl+C. 1.6.2. 
Using nested object configuration You can define an interface that is nested inside another interface. This procedure shows how to create and configure a nested interface in the Quarkus config-quickstart project. Prerequisites You have created the Quarkus config-quickstart project. You have defined the greeting.message and greeting.name properties in the application.properties file of your project. Procedure Review the GreetingResource.java in your project. The file contains the GreetingResource class with the hello() method that returns a message when you send an HTTP request on the /greeting endpoint: Example GreetingResource.java file package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/greeting") public class GreetingResource { @ConfigProperty(name = "greeting.message") String message; @ConfigProperty(name = "greeting.suffix", defaultValue="!") String suffix; @ConfigProperty(name = "greeting.name") Optional<String> name; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + suffix; } } Create a GreetingConfiguration.java class file with the GreetingConfiguration instance. This class contains the externalized configuration for the hello() method that is defined in the GreetingResource class: Example GreetingConfiguration.java file package org.acme.config; import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.Optional; @ConfigMapping(prefix = "greeting") public interface GreetingConfiguration { String message(); @WithDefault("!") String suffix(); Optional<String> name(); } Create the ContentConfig interface that is nested inside the GreetingConfiguration instance, as shown in the following example: Example GreetingConfiguration.java file package org.acme.config; import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.List; import java.util.Optional; @ConfigMapping(prefix = "greeting") public interface GreetingConfiguration { String message(); @WithDefault("!") String suffix(); Optional<String> name(); ContentConfig content(); interface ContentConfig { Integer prizeAmount(); List<String> recipients(); } } Note The method name of the ContentConfig interface is content . To ensure that you bind the properties to the correct interface, when you define configuration properties for this class, use content in the prefix. In doing so, you can also prevent property name conflicts and unexpected application behavior. Define the greeting.content.prize-amount and greeting.content.recipients configuration properties in your application.properties file. The following example shows the properties defined for the GreetingConfiguration instance and the ContentConfig interface: Example application.properties file greeting.message=hello greeting.name=quarkus greeting.content.prize-amount=10 greeting.content.recipients=Jane,John Instead of the three @ConfigProperty field annotations, inject the GreetingConfiguration instance into the GreetingResource class by using the @Inject annotation, as outlined in the following example. Also, you must update the message string that the /greeting endpoint returns with the values that you set for the new greeting.content.prize-amount and greeting.content.recipients properties that you added. 
Example GreetingResource.java file package org.acme.config; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import jakarta.inject.Inject; @Path("/greeting") public class GreetingResource { @Inject GreetingConfiguration config; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return config.message() + " " + config.name().orElse("world") + config.suffix() + "\n" + config.content().recipients() + " receive total of candies: " + config.content().prizeAmount(); } } Compile and start your application in development mode: ./mvnw quarkus:dev Important If you do not provide values for the class properties, the application fails to compile and you receive a jakarta.enterprise.inject.spi.DeploymentException exception that indicates a missing value. This does not apply to Optional fields and fields with a default value. To verify that the endpoint returns the message, open a new terminal window and enter the following command: curl http://localhost:8080/greeting A message displays, containing two lines of output. The first line displays the greeting, and the second line reports the recipients of the prize together with the prize amount, as follows: hello quarkus! [Jane, John] receive total of candies: 10 To stop the application, press Ctrl+C. Note You can annotate classes that are annotated with @ConfigMapping with bean validation annotations similar to the following example: @ConfigMapping(prefix = "greeting") public class GreetingConfiguration { @Size(min = 20) public String message; public String suffix = "!"; } Your project must include the quarkus-hibernate-validator dependency. 1.7. Accessing the configuration programmatically You can define a method in your code to retrieve the values of the configuration properties in your application. In doing so, you can dynamically look up configuration property values or retrieve configuration property values from classes that are either CDI beans or Jakarta REST (formerly known as JAX-RS) resources. You can access the configuration by using the org.eclipse.microprofile.config.ConfigProvider.getConfig() method. The getValue() method of the config object returns the values of the configuration properties. Prerequisites You have a Quarkus Maven project. Procedure Use a method to access the value of a configuration property of any class or object in your application code. 
Depending on whether or not the value that you want to retrieve is set in a configuration source in your project, you can use one of the following methods: To access the value of a property that is set in a configuration source in your project, for example, in the application.properties file, use the getValue() method: String <variable-name> = ConfigProvider.getConfig().getValue(" <property-name> ", <data-type-class-name> .class); For example, to retrieve the value of the greeting.message property that has the data type String , and is assigned to the message variable in your code, use the following syntax: String message = ConfigProvider.getConfig().getValue("greeting.message",String.class); When you want to retrieve a value that is optional or default and might not be defined in your application.properties file or another configuration source in your application, use the getOptionalValue() method: Optional<String> <variable-name> = ConfigProvider.getConfig().getOptionalValue(" <property-name> ", <data-type-class-name> .class); For example, to retrieve the value of the greeting.name property that is optional, has the data type String , and is assigned to the name variable in your code, use the following syntax: Optional<String> name = ConfigProvider.getConfig().getOptionalValue("greeting.name", String.class); The following snippet shows a variant of the aforementioned GreetingResource class by using the programmatic access to the configuration: src/main/java/org/acme/config/GreetingResource.java package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.Config; import org.eclipse.microprofile.config.ConfigProvider; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/greeting") public class GreetingResource { @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { Config config = ConfigProvider.getConfig(); String message = config.getValue("greeting.message", String.class); String suffix = config.getOptionalValue("greeting.suffix", String.class).orElse("!"); Optional<String> name = config.getOptionalValue("greeting.name", String.class); return message + " " + name.orElse("world") + suffix; } } 1.8. Property expressions You can combine property references and text strings into property expressions and use these expressions as values in your Quarkus configuration. Like variables, property expressions substitute configuration property values dynamically, avoiding hard-coded values. You can expand an expression in one configuration source with a value defined in another. The application resolves a property expression when java.util.Properties reads the property value from a configuration source: at compile time if read then, and at runtime if overridden at that point. If the application cannot resolve the value of a property in an expression, and the property does not have a default value, your application throws a NoSuchElementException error. 1.8.1. Example usage of property expressions To achieve flexibility when you configure your Quarkus application, you can use property expressions as shown in the following examples. Substituting the value of a configuration property: To avoid hardcoding property values in your configuration, you can use a property expression. 
Use the USD{<property_name>} syntax to write an expression that references a configuration property, as shown in the following example: Example application.properties file remote.host=quarkus.io callable.url=https://USD{remote.host}/ The value of the callable.url property resolves to https://quarkus.io/ . Setting a property value that is specific to a particular configuration profile: In the following example, the %dev configuration profile and the default configuration profile are set to use data source connection URLs with different host addresses. Example application.properties file %dev.quarkus.datasource.jdbc.url=jdbc:mysql://localhost:3306/mydatabase?useSSL=false quarkus.datasource.jdbc.url=jdbc:mysql://remotehost:3306/mydatabase?useSSL=false Depending on the configuration profile used to start your application, your data source driver uses the database URL that you set for the profile. You can achieve the same result in a simplified way by setting a different value for the custom application.server property for each configuration profile. Then, you can reference the property in the database connection URL of your application, as shown in the following example: Example application.properties file %dev.application.server=localhost application.server=remotehost quarkus.datasource.jdbc.url=jdbc:mysql://USD{application.server}:3306/mydatabase?useSSL=false The application.server property resolves to the appropriate value depending on the profile that you choose when you run your application. Setting a default value of a property expression: You can define a default value for a property expression. Quarkus uses the default value if the value of the property that is required to expand the expression is not resolved from any of your configuration sources. You can set a default value for an expression by using the following syntax: In the following example, the property expression in the data source URL uses mysql.db.server as the default value of the application.server property: Example application.properties file quarkus.datasource.jdbc.url=jdbc:mysql://USD{application.server:mysql.db.server}:3306/mydatabase?useSSL=false Nesting property expressions: You can compose property expressions by nesting a property expression inside another property expression. When nested property expressions are expanded, the inner expression is expanded first. You can use the following syntax for nesting property expressions: Combining multiple property expressions: You can join two or more property expressions together by using the following syntax: Combining property expressions with environment variables: You can use property expressions to substitute the values of environment variables. The expression in the following example substitutes the value that is set for the HOST environment variable as the value of the application.host property: Example application.properties file remote.host=quarkus.io application.host=USD{HOST:USD{remote.host}} When the HOST environment variable is not set, the application.host property uses the value of the remote.host property as the default. 1.9. Using configuration profiles You can use different configuration profiles depending on your environment. With configuration profiles, you can have multiple configurations in the same file and to select between them by using a profile name. 
Red Hat build of Quarkus has the following three default configuration profiles: dev : Activated in development mode test : Activated when running tests prod : The default profile when not running in development or test mode Note In addition, you can create your own custom profiles. Prerequisites You have a Quarkus Maven project. Procedure Open your Java resource file and add the following import statement: import io.quarkus.runtime.configuration.ConfigUtils; To get a List of the current profiles, add a log by invoking the ConfigUtils.getProfiles() method: LOGGER.infof("The application is starting with profiles `%s`", ConfigUtils.getProfiles()); Additional resources For more information about the use of logging APIs, configuring logging output, and using logging adapters for unified output, see Logging configuration . 1.9.1. Setting a custom configuration profile You can create as many configuration profiles as you want. You can have multiple configurations in the same file and you can select a configuration by using a profile name. Procedure To set a custom profile, create a configuration property with the profile name in the application.properties file, where <property_name> is the name of the property, <value> is the property value, and <profile> is the name of a profile: Create a configuration property %<profile>.<property_name>=<value> In the following example configuration, the value of quarkus.http.port is 9090 by default, and becomes 8181 when the dev profile is activated: Example configuration quarkus.http.port=9090 %dev.quarkus.http.port=8181 Use one of the following methods to enable a profile: Set the quarkus.profile system property. To enable a profile by using the quarkus.profile system property, enter the following command: Enable a profile by using quarkus.profile property mvn -Dquarkus.profile=<value> quarkus:dev Set the QUARKUS_PROFILE environment variable. To enable profile by using an environment variable, enter the following command: Enable a profile by using an environment variable export QUARKUS_PROFILE=<profile> Note The system property value takes precedence over the environment variable value. To repackage the application and change the profile, enter the following command: Change a profile ./mvnw package -Dquarkus.profile=<profile> java -jar target/quarkus-app/quarkus-run.jar The following example shows a command that activates the prod-aws profile: Example command to activate a profile ./mvnw package -Dquarkus.profile=prod-aws java -jar target/quarkus-app/quarkus-run.jar Note The default Quarkus application runtime profile is set to the profile that is used to build the application. Red Hat build of Quarkus automatically selects a profile depending on your environment mode. For example, when your application is running as a JAR, Quarkus is in prod mode. 1.10. Setting custom configuration sources By default, a Quarkus application reads properties from the application.properties file in the src/main/resources subdirectory of your project. With Quarkus, you can load application configuration properties from other sources according to the MicroProfile Config specification for externalized configuration. You can enable your application to load configuration properties from other sources by defining classes that implement the org.eclipse.microprofile.config.spi.ConfigSource and the org.eclipse.microprofile.config.spi.ConfigSourceProvider interfaces. This procedure demonstrates how you can implement a custom configuration source in your Quarkus project. 
Prerequisite You have the Quarkus config-quickstart project. Note For a completed example of that project, download the Quarkus Quickstarts archive or clone the Quarkus Quickstarts Git repository and go to the config-quickstart directory. Procedure In your project, create a new class that implements the org.eclipse.microprofile.config.spi.ConfigSourceProvider interface. Override the getConfigSources() method to return a list of your custom ConfigSource objects. Example org.acme.config.InMemoryConfigSourceProvider package org.acme.config; import org.eclipse.microprofile.config.spi.ConfigSource; import org.eclipse.microprofile.config.spi.ConfigSourceProvider; import java.util.List; public class InMemoryConfigSourceProvider implements ConfigSourceProvider { @Override public Iterable<ConfigSource> getConfigSources(ClassLoader classLoader) { return List.of(new InMemoryConfigSource()); } } To define your custom configuration source, create an InMemoryConfigSource class that implements the org.eclipse.microprofile.config.spi.ConfigSource interface: Example org.acme.config.InMemoryConfigSource package org.acme.config; import org.eclipse.microprofile.config.spi.ConfigSource; import java.util.HashMap; import java.util.Map; import java.util.Set; public class InMemoryConfigSource implements ConfigSource { private static final Map<String, String> configuration = new HashMap<>(); static { configuration.put("my.prop", "1234"); } @Override public int getOrdinal() { 1 return 275; } @Override public Set<String> getPropertyNames() { return configuration.keySet(); } @Override public String getValue(final String propertyName) { return configuration.get(propertyName); } @Override public String getName() { return InMemoryConfigSource.class.getSimpleName(); } } 1 The getOrdinal() method returns the priority of the ConfigSource class. Therefore, when multiple configuration sources define the same property, Quarkus can select the appropriate value as defined by the ConfigSource class with the highest priority. In the src/main/resources/META-INF/services/ subdirectory of your project, create a file named org.eclipse.microprofile.config.spi.ConfigSourceProvider and enter the fully-qualified name of the class that implements the ConfigSourceProvider in the file that you created: Example org.eclipse.microprofile.config.spi.ConfigSourceProvider file: org.acme.config.InMemoryConfigSourceProvider To ensure that the ConfigSourceProvider that you created is registered and installed when you compile and start your application, you must complete the step. Edit the GreetingResource.java file in your project to add the following update: @ConfigProperty(name="my.prop") int value; In the GreetingResource.java file, extend the hello method to use the new property: @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + " " + value; } Compile and start your application in development mode: ./mvnw quarkus:dev Open a terminal window and send a request to the /greeting endpoint: curl http://localhost:8080/greeting Verify that your application has read the custom configuration and returned the expected message: hello world 1234 1.11. Using custom configuration converters as configuration values You can store custom types as configuration values by implementing org.eclipse.microprofile.config.spi.Converter<T> and adding its fully qualified class name into the META-INF/services/org.eclipse.microprofile.config.spi.Converter file. 
By using converters, you can transform the string representation of a value into an object. Prerequisites You have created the Quarkus config-quickstart project. Procedure In the org.acme.config package, create the org.acme.config.MyCustomValue class with the following content: Example of custom configuration value package org.acme.config; public class MyCustomValue { private final int value; public MyCustomValue(Integer value) { this.value = value; } public int value() { return value; } } Implement the converter class to override the convert method to produce a MyCustomValue instance. Example implementation of converter class package org.acme.config; import org.eclipse.microprofile.config.spi.Converter; public class MyCustomValueConverter implements Converter<MyCustomValue> { @Override public MyCustomValue convert(String value) { return new MyCustomValue(Integer.valueOf(value)); } } Add the fully-qualified class name of the converter, org.acme.config.MyCustomValueConverter , to your META-INF/services/org.eclipse.microprofile.config.spi.Converter service file. In the GreetingResource.java file, inject the MyCustomValue property: @ConfigProperty(name="custom") MyCustomValue value; Edit the hello method to use this value: @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + " - " + value.value(); } In the application.properties file, add the string representation to be converted: custom=1234 To compile and start your application in development mode, enter the following command: ./mvnw quarkus:dev To verify that the /greeting endpoint returns the expected message, open a terminal window and enter the following command: curl http://localhost:8080/greeting When your application successfully reads the custom configuration, the command returns the following response: hello world - 1234 Note Your custom converter class must be public and must have a public no-argument constructor. Your custom converter class cannot be abstract . Additional resources: List of converters in the microprofile-config GitHub repository 1.11.1. Setting custom converters priority The default priority for all Quarkus core converters is 200. For all other converters, the default priority is 100. You can increase the priority of your custom converters by using the jakarta.annotation.Priority annotation. The following procedure demonstrates an implementation of a custom converter, AnotherCustomValueConverter , which has a priority of 150. This takes precedence over MyCustomValueConverter from the section, which has a default priority of 100. Prerequisites You have created the Quarkus config-quickstart project. You have created a custom configuration converter for your application. Procedure Set a priority for your custom converter by annotating the class with the @Priority annotation and passing it a priority value. In the following example, the priority value is set to 150 . 
Example AnotherCustomValueConverter.java file package org.acme.config; import jakarta.annotation.Priority; import org.eclipse.microprofile.config.spi.Converter; @Priority(150) public class AnotherCustomValueConverter implements Converter<MyCustomValue> { @Override public MyCustomValue convert(String value) { return new MyCustomValue(Integer.valueOf(value)); } } Create a file named org.eclipse.microprofile.config.spi.Converter in the src/main/resources/META-INF/services/ subdirectory of your project, and enter the fully qualified name of the class that implements the Converter in the file that you created: Example org.eclipse.microprofile.config.spi.Converter file org.acme.config.AnotherCustomValueConverter You must complete this step to ensure that the Converter you created is registered and installed when you compile and start your application. Verification After you complete the required configuration, the next step is to compile and package your Quarkus application. For more information and examples, see the compiling and packaging sections of the Getting started with Quarkus guide. 1.12. Additional resources Developing and compiling your Quarkus applications with Apache Maven Deploying your Quarkus applications to OpenShift Container Platform Compiling your Quarkus applications to native executables Revised on 2025-02-28 13:35:49 UTC
|
[
"mvn --version",
"export PATH=USDPATH:<path_to_JDK>",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.3.SP1-redhat-00002:create -DprojectGroupId=org.acme -DprojectArtifactId=config-quickstart -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=3.15.3.SP1-redhat-00002 -DclassName=\"org.acme.config.GreetingResource\" -Dpath=\"/greeting\" cd config-quickstart",
"<property_name>=<value>",
"greeting.message=hello greeting.name=quarkus",
"package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/greeting\") public class GreetingResource { String message; Optional<String> name; String suffix; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + suffix; } }",
"package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/greeting\") public class GreetingResource { @ConfigProperty(name = \"greeting.message\") 1 String message; @ConfigProperty(name = \"greeting.suffix\", defaultValue=\"!\") 2 String suffix; @ConfigProperty(name = \"greeting.name\") Optional<String> name; 3 @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + suffix; } }",
"./mvnw quarkus:dev",
"curl http://localhost:8080/greeting",
"hello quarkus!",
"package org.acme.config; import io.quarkus.test.junit.QuarkusTest; import org.junit.jupiter.api.Test; import static io.restassured.RestAssured.given; import static org.hamcrest.CoreMatchers.is; @QuarkusTest public class GreetingResourceTest { @Test public void testHelloEndpoint() { given() .when().get(\"/greeting\") .then() .statusCode(200) .body(is(\"hello quarkus!\")); // Modified line } }",
"./mvnw quarkus:dev",
"./mvnw clean package",
"java -D<property_name>=<value> -jar target/quarkus-app/quarkus-run.jar",
"java -Dgreeting.suffix=? -jar target/quarkus-app/quarkus-run.jar",
"export <property_name>=<value> ; java -jar target/quarkus-app/quarkus-run.jar",
"<PROPERTY_NAME>=<value>",
"package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/greeting\") public class GreetingResource { @ConfigProperty(name = \"greeting.message\") String message; @ConfigProperty(name = \"greeting.suffix\", defaultValue=\"!\") String suffix; @ConfigProperty(name = \"greeting.name\") Optional<String> name; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + suffix; } }",
"package org.acme.config; import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.Optional; @ConfigMapping(prefix = \"greeting\") 1 public interface GreetingConfiguration { String message(); @WithDefault(\"!\") 2 String suffix(); Optional<String> name(); }",
"package org.acme.config; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/greeting\") public class GreetingResource { @Inject GreetingConfiguration config; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return config.message() + \" \" + config.name().orElse(\"world\") + config.suffix(); } }",
"./mvnw quarkus:dev",
"curl http://localhost:8080/greeting",
"hello quarkus!",
"package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/greeting\") public class GreetingResource { @ConfigProperty(name = \"greeting.message\") String message; @ConfigProperty(name = \"greeting.suffix\", defaultValue=\"!\") String suffix; @ConfigProperty(name = \"greeting.name\") Optional<String> name; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + suffix; } }",
"package org.acme.config; import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.Optional; @ConfigMapping(prefix = \"greeting\") public interface GreetingConfiguration { String message(); @WithDefault(\"!\") String suffix(); Optional<String> name(); }",
"package org.acme.config; import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.List; import java.util.Optional; @ConfigMapping(prefix = \"greeting\") public interface GreetingConfiguration { String message(); @WithDefault(\"!\") String suffix(); Optional<String> name(); ContentConfig content(); interface ContentConfig { Integer prizeAmount(); List<String> recipients(); } }",
"greeting.message=hello greeting.name=quarkus greeting.content.prize-amount=10 greeting.content.recipients=Jane,John",
"package org.acme.config; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import jakarta.inject.Inject; @Path(\"/greeting\") public class GreetingResource { @Inject GreetingConfiguration config; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return config.message() + \" \" + config.name().orElse(\"world\") + config.suffix() + \"\\n\" + config.content().recipients() + \" receive total of candies: \" + config.content().prizeAmount(); } }",
"./mvnw quarkus:dev",
"curl http://localhost:8080/greeting",
"hello quarkus! [Jane, John] receive total of candies: 10",
"@ConfigMapping(prefix = \"greeting\") public class GreetingConfiguration { @Size(min = 20) public String message; public String suffix = \"!\"; }",
"String <variable-name> = ConfigProvider.getConfig().getValue(\" <property-name> \", <data-type-class-name> .class);",
"String message = ConfigProvider.getConfig().getValue(\"greeting.message\",String.class);",
"Optional<String> <variable-name> = ConfigProvider.getConfig().getOptionalValue(\" <property-name> \", <data-type-class-name> .class);",
"Optional<String> name = ConfigProvider.getConfig().getOptionalValue(\"greeting.name\", String.class);",
"package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.Config; import org.eclipse.microprofile.config.ConfigProvider; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/greeting\") public class GreetingResource { @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { Config config = ConfigProvider.getConfig(); String message = config.getValue(\"greeting.message\", String.class); String suffix = config.getOptionalValue(\"greeting.suffix\", String.class).orElse(\"!\"); Optional<String> name = config.getOptionalValue(\"greeting.name\", String.class); return message + \" \" + name.orElse(\"world\") + suffix; } }",
"remote.host=quarkus.io callable.url=https://USD{remote.host}/",
"%dev.quarkus.datasource.jdbc.url=jdbc:mysql://localhost:3306/mydatabase?useSSL=false quarkus.datasource.jdbc.url=jdbc:mysql://remotehost:3306/mydatabase?useSSL=false",
"%dev.application.server=localhost application.server=remotehost quarkus.datasource.jdbc.url=jdbc:mysql://USD{application.server}:3306/mydatabase?useSSL=false",
"USD{<property_name>:<default_value>}",
"quarkus.datasource.jdbc.url=jdbc:mysql://USD{application.server:mysql.db.server}:3306/mydatabase?useSSL=false",
"USD{<outer_property_name>USD{<inner_property_name>}}",
"USD{<first_property_name>}USD{<second_property_name>}",
"remote.host=quarkus.io application.host=USD{HOST:USD{remote.host}}",
"import io.quarkus.runtime.configuration.ConfigUtils;",
"LOGGER.infof(\"The application is starting with profiles `%s`\", ConfigUtils.getProfiles());",
"%<profile>.<property_name>=<value>",
"quarkus.http.port=9090 %dev.quarkus.http.port=8181",
"mvn -Dquarkus.profile=<value> quarkus:dev",
"export QUARKUS_PROFILE=<profile>",
"./mvnw package -Dquarkus.profile=<profile> java -jar target/quarkus-app/quarkus-run.jar",
"./mvnw package -Dquarkus.profile=prod-aws java -jar target/quarkus-app/quarkus-run.jar",
"package org.acme.config; import org.eclipse.microprofile.config.spi.ConfigSource; import org.eclipse.microprofile.config.spi.ConfigSourceProvider; import java.util.List; public class InMemoryConfigSourceProvider implements ConfigSourceProvider { @Override public Iterable<ConfigSource> getConfigSources(ClassLoader classLoader) { return List.of(new InMemoryConfigSource()); } }",
"package org.acme.config; import org.eclipse.microprofile.config.spi.ConfigSource; import java.util.HashMap; import java.util.Map; import java.util.Set; public class InMemoryConfigSource implements ConfigSource { private static final Map<String, String> configuration = new HashMap<>(); static { configuration.put(\"my.prop\", \"1234\"); } @Override public int getOrdinal() { 1 return 275; } @Override public Set<String> getPropertyNames() { return configuration.keySet(); } @Override public String getValue(final String propertyName) { return configuration.get(propertyName); } @Override public String getName() { return InMemoryConfigSource.class.getSimpleName(); } }",
"org.acme.config.InMemoryConfigSourceProvider",
"@ConfigProperty(name=\"my.prop\") int value;",
"@GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + \" \" + value; }",
"./mvnw quarkus:dev",
"curl http://localhost:8080/greeting",
"hello world 1234",
"package org.acme.config; public class MyCustomValue { private final int value; public MyCustomValue(Integer value) { this.value = value; } public int value() { return value; } }",
"package org.acme.config; import org.eclipse.microprofile.config.spi.Converter; public class MyCustomValueConverter implements Converter<MyCustomValue> { @Override public MyCustomValue convert(String value) { return new MyCustomValue(Integer.valueOf(value)); } }",
"@ConfigProperty(name=\"custom\") MyCustomValue value;",
"@GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + \" - \" + value.value(); }",
"custom=1234",
"./mvnw quarkus:dev",
"curl http://localhost:8080/greeting",
"hello world - 1234",
"package org.acme.config; import jakarta.annotation.Priority; import org.eclipse.microprofile.config.spi.Converter; @Priority(150) public class AnotherCustomValueConverter implements Converter<MyCustomValue> { @Override public MyCustomValue convert(String value) { return new MyCustomValue(Integer.valueOf(value)); } }",
"org.acme.config.AnotherCustomValueConverter"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_properties_file/assembly_quarkus-configuration-guide_quarkus-configuration-guide
|
Chapter 38. Failover, load balancing and high availability in Identity Management
|
Chapter 38. Failover, load balancing and high availability in Identity Management Identity Management (IdM) comes with its own failover, load-balancing and high-availability features, for example LDAP identity domain and certificate replication, and service discovery and failover support provided by the System Security Services Daemon (SSSD). IdM is thus equipped with: Client-side failover capability Server-side service availability Client-side failover capability SSSD obtains service (SRV) resource records from DNS servers that the client discovers automatically. Based on the SRV records, SSSD maintains a list of available IdM servers, including information about the connectivity of these servers. If one IdM server goes offline or is overloaded, SSSD already knows which other server to communicate with. If DNS autodiscovery is not available, IdM clients should be configured at least with a fixed list of IdM servers to retrieve SRV records from in case of a failure. During the installation of an IdM client, the installer searches for _ldap._tcp.DOMAIN DNS SRV records for all domains that are parent to the client's hostname. In this way, the installer retrieves the hostname of the IdM server that is most conveniently located for communicating with the client, and uses its domain to configure the client components. Server-side service availability IdM allows replicating servers in geographically dispersed data centers to shorten the path between IdM clients and the nearest accessible server. Replicating servers allows spreading the load and scaling for more clients. The IdM replication mechanism provides active/active service availability. Services at all IdM replicas are readily available at the same time. Note Combining IdM with other load-balancing or HA software is not recommended. Many third-party high availability (HA) solutions assume active/passive scenarios and cause unneeded service interruption to IdM availability. Other solutions use virtual IPs or a single hostname per clustered service. These methods typically do not work well with the type of service availability provided by the IdM solution. They also integrate very poorly with Kerberos, decreasing the overall security and stability of the deployment. Deploying other, unrelated services on IdM masters is also discouraged, especially if these services are supposed to be highly available and use solutions that modify the networking configuration to provide HA features. For more details about using load balancers when Kerberos is used for authentication, see this blog post.
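To make the SRV-based discovery concrete, the following minimal Java sketch (an illustration only, not part of IdM or SSSD; example.com is a placeholder domain) performs the same kind of _ldap._tcp lookup that SSSD relies on, using the JNDI DNS provider:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class SrvLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        // Use the JDK's built-in DNS context factory to query SRV records
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        DirContext ctx = new InitialDirContext(env);
        // Placeholder domain; an IdM client would query its own parent domains
        Attribute srv = ctx.getAttributes("_ldap._tcp.example.com", new String[] {"SRV"}).get("SRV");
        for (int i = 0; srv != null && i < srv.size(); i++) {
            // Each value has the form: priority weight port target
            System.out.println(srv.get(i));
        }
        ctx.close();
    }
}

Each returned record names a server and port; a client library such as SSSD orders the results by priority and weight and fails over to the next server when one becomes unreachable.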
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/load-balancing
|
2.4. Obtaining Information about Control Groups
|
2.4. Obtaining Information about Control Groups Use the systemctl command to list system units and to view their status. Also, the systemd-cgls command is provided to view the hierarchy of control groups and systemd-cgtop to monitor their resource consumption in real time. 2.4.1. Listing Units Use the following command to list all active units on the system: The list-units option is executed by default, which means that you will receive the same output when you omit this option and execute just: The output displayed above contains five columns: UNIT - the name of the unit that also reflects the unit's position in the cgroup tree. As mentioned in the section called "Systemd Unit Types" , three unit types are relevant for resource control: slice , scope , and service . For a complete list of systemd 's unit types, see the chapter called Managing Services with systemd in Red Hat Enterprise Linux 7 System Administrators Guide . LOAD - indicates whether the unit configuration file was properly loaded. If the unit file failed to load, the field contains the state error instead of loaded . Other unit load states are: stub , merged , and masked . ACTIVE - the high-level unit activation state, which is a generalization of SUB. SUB - the low-level unit activation state. The range of possible values depends on the unit type. DESCRIPTION - the description of the unit's content and functionality. By default, systemctl lists only active units (in terms of high-level activations state in the ACTIVE field). Use the --all option to see inactive units too. To limit the amount of information in the output list, use the --type ( -t ) parameter that requires a comma-separated list of unit types such as service and slice , or unit load states such as loaded and masked . Example 2.8. Using systemctl list-units To view a list of all slices used on the system, type: To list all active masked services, type: To list all unit files installed on your system and their status, type: 2.4.2. Viewing the Control Group Hierarchy The aforementioned listing commands do not go beyond the unit level to show the actual processes running in cgroups. Also, the output of systemctl does not show the hierarchy of units. You can achieve both by using the systemd-cgls command that groups the running process according to cgroups. To display the whole cgroup hierarchy on your system, type: When systemd-cgls is issued without parameters, it returns the entire cgroup hierarchy. The highest level of the cgroup tree is formed by slices and can look as follows: Note that machine slice is present only if you are running a virtual machine or a container. For more information on the cgroup tree, see the section called "Systemd Unit Types" . To reduce the output of systemd-cgls , and to view a specified part of the hierarchy, execute: Replace name with a name of the resource controller you want to inspect. As an alternative, use the systemctl status command to display detailed information about a system unit. A cgroup subtree is a part of the output of this command. To learn more about systemctl status , see the chapter called Managing Services with systemd in Red Hat Enterprise Linux 7 System Administrators Guide . Example 2.9. Viewing the Control Group Hierarchy To see a cgroup tree of the memory resource controller, execute: The output of the above command lists the services that interact with the selected controller. 
A different approach is to view a part of the cgroup tree for a certain service, slice, or scope unit: Besides the aforementioned tools, systemd also provides the machinectl command dedicated to monitoring Linux containers. 2.4.3. Viewing Resource Controllers The aforementioned systemctl commands enable monitoring the higher-level unit hierarchy, but do not show which resource controllers in the Linux kernel are actually used by which processes. This information is stored in dedicated process files. To view it, type as root: Where PID stands for the ID of the process you wish to examine. By default, the list is the same for all units started by systemd , since it automatically mounts all default controllers. See the following example: By examining this file, you can determine if the process has been placed in the correct cgroups as defined by the systemd unit file specifications. 2.4.4. Monitoring Resource Consumption The systemd-cgls command provides a static snapshot of the cgroup hierarchy. To see a dynamic account of currently running cgroups ordered by their resource usage (CPU, Memory, and IO), use: The behavior, provided statistics, and control options of systemd-cgtop are akin to those of the top utility. See the systemd-cgtop (1) manual page for more information.
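The controller membership described in Section 2.4.3 can also be checked from application code by reading the same process file. The following short Java sketch (an illustration only, not part of this guide) prints the cgroup membership of the current process by reading /proc/self/cgroup, which lists one hierarchy per line:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class CgroupMembership {
    public static void main(String[] args) throws IOException {
        // /proc/self/cgroup describes the cgroups of the calling process
        Path cgroupFile = Paths.get("/proc/self/cgroup");
        List<String> lines = Files.readAllLines(cgroupFile);
        for (String line : lines) {
            // Format: hierarchy-ID:controller-list:cgroup-path
            System.out.println(line);
        }
    }
}

Running the program from inside a systemd service shows the same controller paths as the cat command above, which makes it easy to verify from within an application that it was placed in the expected slice.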
|
[
"~]# systemctl list-units",
"~]USD systemctl UNIT LOAD ACTIVE SUB DESCRIPTION abrt-ccpp.service loaded active exited Install ABRT coredump hook abrt-oops.service loaded active running ABRT kernel log watcher abrt-vmcore.service loaded active exited Harvest vmcores for ABRT abrt-xorg.service loaded active running ABRT Xorg log watcher",
"~]USD systemctl -t slice",
"~]USD systemctl -t service,masked",
"~]USD systemctl list-unit-files",
"~]USD systemd-cgls",
"├─system │ ├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 20 │ │ ├─user │ ├─user-1000 │ │ └─ │ ├─user-2000 │ │ └─ │ │ └─machine ├─machine-1000 │ └─",
"~]USD systemd-cgls name",
"~]USD systemctl name",
"~]USD systemd-cgls memory memory: ├─ 1 /usr/lib/systemd/systemd --switched-root --system --deserialize 23 ├─ 475 /usr/lib/systemd/systemd-journald",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled) Active: active (running) since Sun 2014-03-23 08:01:14 MDT; 33min ago Process: 3385 ExecReload=/usr/sbin/httpd USDOPTIONS -k graceful (code=exited, status=0/SUCCESS) Main PID: 1205 (httpd) Status: \"Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec\" CGroup: /system.slice/httpd.service ├─1205 /usr/sbin/httpd -DFOREGROUND ├─3387 /usr/sbin/httpd -DFOREGROUND ├─3388 /usr/sbin/httpd -DFOREGROUND ├─3389 /usr/sbin/httpd -DFOREGROUND ├─3390 /usr/sbin/httpd -DFOREGROUND └─3391 /usr/sbin/httpd -DFOREGROUND",
"~]# cat proc/ PID /cgroup",
"~]# cat proc/ 27 /cgroup 10:hugetlb:/ 9:perf_event:/ 8:blkio:/ 7:net_cls:/ 6:freezer:/ 5:devices:/ 4:memory:/ 3:cpuacct,cpu:/ 2:cpuset:/ 1:name=systemd:/",
"~]# systemd-cgtop"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/sec-Obtaining_Information_About_Control_Groups
|
Chapter 6. Managing Kafka
|
Chapter 6. Managing Kafka Use additional configuration properties to maintain a deployment of AMQ Streams. You can add and adjust settings to respond to the performance of AMQ Streams. For example, you can introduce additional configuration to improve throughput and data reliability. 6.1. Tuning Kafka configuration Use configuration properties to optimize the performance of Kafka brokers, producers and consumers. A minimum set of configuration properties is required, but you can add or adjust properties to change how producers and consumers interact with Kafka brokers. For example, you can tune latency and throughput of messages so that clients can respond to data in real time. You might start by analyzing metrics to gauge where to make your initial configurations, then make incremental changes and further comparisons of metrics until you have the configuration you need. For more information about Apache Kafka configuration properties, see the Apache Kafka documentation . 6.1.1. Kafka broker configuration tuning Use configuration properties to optimize the performance of Kafka brokers. You can use standard Kafka broker configuration options, except for properties managed directly by AMQ Streams. 6.1.1.1. Basic broker configuration A basic configuration will include the following properties to identify your brokers and provide secure access: broker.id is the ID of the Kafka broker log.dirs are the directories for log data zookeeper.connect is the configuration to connect Kafka with ZooKeeper listener exposes the Kafka cluster to clients authorization mechanisms allow or decline actions executed by users authentication mechanisms prove the identity of users requiring access to Kafka You can find more details on the basic configuration options in Configuring Kafka . A typical broker configuration will also include settings for properties related to topics, threads and logs. Basic broker configuration properties # ... num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000 # ... 6.1.1.2. Replicating topics for high availability Basic topic properties set the default number of partitions and replication factor for topics, which will apply to topics that are created without these properties being explicitly set, including when topics are created automatically. # ... num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576 # ... The auto.create.topics.enable property is enabled by default so that topics that do not already exist are created automatically when needed by producers and consumers. If you are using automatic topic creation, you can set the default number of partitions for topics using num.partitions . Generally, however, this property is disabled so that more control is provided over topics through explicit topic creation For high availability environments, it is advisable to increase the replication factor to at least 3 for topics and set the minimum number of in-sync replicas required to 1 less than the replication factor. 
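As a minimal sketch of how such a topic might be created explicitly (this example is not part of the AMQ Streams documentation; the bootstrap address localhost:9092 and the topic name my-topic are assumptions), the Kafka AdminClient can apply the recommended replication factor together with the min.insync.replicas topic setting discussed below at creation time:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address; replace with your own brokers
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 3, and min.insync.replicas=2 at the topic level
            NewTopic topic = new NewTopic("my-topic", 3, (short) 3)
                    .configs(Map.of("min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}

Creating topics explicitly in this way works well with auto.create.topics.enable left disabled, because every topic then starts out with the intended replication settings.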
For data durability , you should also set min.insync.replicas in your topic configuration and message delivery acknowledgments using acks=all in your producer configuration. Use replica.fetch.max.bytes to set the maximum size, in bytes, of messages fetched by each follower that replicates the leader partition. Change this value according to the average message size and throughput. When considering the total memory allocation required for read/write buffering, the memory available must also be able to accommodate the maximum replicated message size when multiplied by all followers. The size must also be greater than message.max.bytes , so that all messages can be replicated. The delete.topic.enable property is enabled by default to allow topics to be deleted. In a production environment, you should disable this property to avoid accidental topic deletion, resulting in data loss. You can, however, temporarily enable it and delete topics and then disable it again. # ... auto.create.topics.enable=false delete.topic.enable=true # ... 6.1.1.3. Internal topic settings for transactions and commits If you are using transactions to enable atomic writes to partitions from producers, the state of the transactions is stored in the internal __transaction_state topic. By default, the brokers are configured with a replication factor of 3 and a minimum of 2 in-sync replicas for this topic, which means that a minimum of three brokers are required in your Kafka cluster. # ... transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 # ... Similarly, the internal __consumer_offsets topic, which stores consumer state, has default settings for the number of partitions and replication factor. # ... offsets.topic.num.partitions=50 offsets.topic.replication.factor=3 # ... Do not reduce these settings in production. You can increase the settings in a production environment. As an exception, you might want to reduce the settings in a single-broker test environment. 6.1.1.4. Improving request handling throughput by increasing I/O threads Network threads handle requests to the Kafka cluster, such as produce and fetch requests from client applications. Produce requests are placed in a request queue. Responses are placed in a response queue. The number of network threads should reflect the replication factor and the levels of activity from client producers and consumers interacting with the Kafka cluster. If you are going to have a lot of requests, you can increase the number of threads, using the amount of time threads are idle to determine when to add more threads. To reduce congestion and regulate the request traffic, you can limit the number of requests allowed in the request queue before the network thread is blocked. I/O threads pick up requests from the request queue to process them. Adding more threads can improve throughput, but the number of CPU cores and disk bandwidth imposes a practical upper limit. At a minimum, the number of I/O threads should equal the number of storage volumes. # ... num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=1 4 # ... 1 The number of network threads for the Kafka cluster. 2 The number of requests allowed in the request queue. 3 The number of I/O threads for a Kafka broker. 4 The number of threads used for log loading at startup and flushing at shutdown. Configuration updates to the thread pools for all brokers might occur dynamically at the cluster level. 
These updates are restricted to between half the current size and twice the current size. Note Kafka broker metrics can help with working out the number of threads required. For example, metrics for the average time network threads are idle ( kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent ) indicate the percentage of resources used. If there is 0% idle time, all resources are in use, which means that adding more threads might be beneficial. If threads are slow or limited due to the number of disks, you can try increasing the size of the buffers for network requests to improve throughput: # ... replica.socket.receive.buffer.bytes=65536 # ... And also increase the maximum number of bytes Kafka can receive: # ... socket.request.max.bytes=104857600 # ... 6.1.1.5. Increasing bandwidth for high latency connections Kafka batches data to achieve reasonable throughput over high-latency connections from Kafka to clients, such as connections between datacenters. However, if high latency is a problem, you can increase the size of the buffers for sending and receiving messages. # ... socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576 # ... You can estimate the optimal size of your buffers using a bandwidth-delay product calculation, which multiplies the maximum bandwidth of the link (in bytes/s) with the round-trip delay (in seconds) to give an estimate of how large a buffer is required to sustain maximum throughput. 6.1.1.6. Managing logs with data retention policies Kafka uses logs to store message data. Logs are a series of segments associated with various indexes. New messages are written to an active segment, and never subsequently modified. Segments are read when serving fetch requests from consumers. Periodically, the active segment is rolled to become read-only and a new active segment is created to replace it. There is only a single segment active at a time. Older segments are retained until they are eligible for deletion. Configuration at the broker level sets the maximum size in bytes of a log segment and the amount of time in milliseconds before an active segment is rolled: # ... log.segment.bytes=1073741824 log.roll.ms=604800000 # ... You can override these settings at the topic level using segment.bytes and segment.ms . Whether you need to lower or raise these values depends on the policy for segment deletion. A larger size means the active segment contains more messages and is rolled less often. Segments also become eligible for deletion less often. You can set time-based or size-based log retention and cleanup policies so that logs are kept manageable. Depending on your requirements, you can use log retention configuration to delete old segments. If log retention policies are used, non-active log segments are removed when retention limits are reached. Deleting old segments bounds the storage space required for the log so you do not exceed disk capacity. For time-based log retention, you set a retention period based on hours, minutes and milliseconds. The retention period is based on the time messages were appended to the segment. The milliseconds configuration has priority over minutes, which has priority over hours. The minutes and milliseconds configuration is null by default, but the three options provide a substantial level of control over the data you wish to retain. Preference should be given to the milliseconds configuration, as it is the only one of the three properties that is dynamically updateable. # ... log.retention.ms=1680000 # ... 
If log.retention.ms is set to -1, no time limit is applied to log retention, so all logs are retained. Disk usage should always be monitored, but the -1 setting is not generally recommended as it can lead to issues with full disks, which can be hard to rectify. For size-based log retention, you set a maximum log size (of all segments in the log) in bytes: # ... log.retention.bytes=1073741824 # ... In other words, a log will typically have approximately log.retention.bytes/log.segment.bytes segments once it reaches a steady state. When the maximum log size is reached, older segments are removed. A potential issue with using a maximum log size is that it does not take into account the time messages were appended to a segment. You can use time-based and size-based log retention for your cleanup policy to get the balance you need. Whichever threshold is reached first triggers the cleanup. If you wish to add a time delay before a segment file is deleted from the system, you can add the delay using log.segment.delete.delay.ms for all topics at the broker level or file.delete.delay.ms for specific topics in the topic configuration. # ... log.segment.delete.delay.ms=60000 # ... 6.1.1.7. Removing log data with cleanup policies The method of removing older log data is determined by the log cleaner configuration. The log cleaner is enabled for the broker by default: # ... log.cleaner.enable=true # ... You can set the cleanup policy at the topic or broker level. Broker-level configuration is the default for topics that do not have policy set. You can set policy to delete logs, compact logs, or do both: # ... log.cleanup.policy=compact,delete # ... The delete policy corresponds to managing logs with data retention policies. It is suitable when data does not need to be retained forever. The compact policy guarantees to keep the most recent message for each message key. Log compaction is suitable where message values are changeable, and you want to retain the latest update. If cleanup policy is set to delete logs, older segments are deleted based on log retention limits. Otherwise, if the log cleaner is not enabled, and there are no log retention limits, the log will continue to grow. If cleanup policy is set for log compaction, the head of the log operates as a standard Kafka log, with writes for new messages appended in order. In the tail of a compacted log, where the log cleaner operates, records will be deleted if another record with the same key occurs later in the log. Messages with null values are also deleted. If you're not using keys, you can't use compaction because keys are needed to identify related messages. While Kafka guarantees that the latest messages for each key will be retained, it does not guarantee that the whole compacted log will not contain duplicates. Figure 6.1. Log showing key value writes with offset positions before compaction Using keys to identify messages, Kafka compaction keeps the latest message (with the highest offset) for a specific message key, eventually discarding earlier messages that have the same key. In other words, the message in its latest state is always available and any out-of-date records of that particular message are eventually removed when the log cleaner runs. You can restore a message back to a state. Records retain their original offsets even when surrounding records get deleted. Consequently, the tail can have non-contiguous offsets. When consuming an offset that's no longer available in the tail, the record with the higher offset is found. 
Figure 6.2. Log after compaction If you choose only a compact policy, your log can still become arbitrarily large. In which case, you can set policy to compact and delete logs. If you choose to compact and delete, first the log data is compacted, removing records with a key in the head of the log. After which, data that falls before the log retention threshold is deleted. Figure 6.3. Log retention point and compaction point You set the frequency the log is checked for cleanup in milliseconds: # ... log.retention.check.interval.ms=300000 # ... Adjust the log retention check interval in relation to the log retention settings. Smaller retention sizes might require more frequent checks. The frequency of cleanup should be often enough to manage the disk space, but not so often it affects performance on a topic. You can also set a time in milliseconds to put the cleaner on standby if there are no logs to clean: # ... log.cleaner.backoff.ms=15000 # ... If you choose to delete older log data, you can set a period in milliseconds to retain the deleted data before it is purged: # ... log.cleaner.delete.retention.ms=86400000 # ... The deleted data retention period gives time to notice the data is gone before it is irretrievably deleted. To delete all messages related to a specific key, a producer can send a tombstone message. A tombstone has a null value and acts as a marker to tell a consumer the value is deleted. After compaction, only the tombstone is retained, which must be for a long enough period for the consumer to know that the message is deleted. When older messages are deleted, having no value, the tombstone key is also deleted from the partition. 6.1.1.8. Managing disk utilization There are many other configuration settings related to log cleanup, but of particular importance is memory allocation. The deduplication property specifies the total memory for cleanup across all log cleaner threads. You can set an upper limit on the percentage of memory used through the buffer load factor. # ... log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9 # ... Each log entry uses exactly 24 bytes, so you can work out how many log entries the buffer can handle in a single run and adjust the setting accordingly. If possible, consider increasing the number of log cleaner threads if you are looking to reduce the log cleaning time: # ... log.cleaner.threads=8 # ... If you are experiencing issues with 100% disk bandwidth usage, you can throttle the log cleaner I/O so that the sum of the read/write operations is less than a specified double value based on the capabilities of the disks performing the operations: # ... log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 # ... 6.1.1.9. Handling large message sizes The default batch size for messages is 1MB, which is optimal for maximum throughput in most use cases. Kafka can accommodate larger batches at a reduced throughput, assuming adequate disk capacity. Large message sizes are handled in four ways: Producer-side message compression writes compressed messages to the log. Reference-based messaging sends only a reference to data stored in some other system in the message's value. Inline messaging splits messages into chunks that use the same key, which are then combined on output using a stream-processor like Kafka Streams. Broker and producer/consumer client application configuration built to handle larger message sizes. The reference-based messaging and message compression options are recommended and cover most situations. 
With any of these options, care must be taken to avoid introducing performance issues. Producer-side compression For producer configuration, you specify a compression.type , such as Gzip, which is then applied to batches of data generated by the producer. Using the broker configuration compression.type=producer , the broker retains whatever compression the producer used. Whenever producer and topic compression do not match, the broker has to compress batches again prior to appending them to the log, which impacts broker performance. Compression also adds additional processing overhead on the producer and decompression overhead on the consumer, but includes more data in a batch, so is often beneficial to throughput when message data compresses well. Combine producer-side compression with fine-tuning of the batch size to facilitate optimum throughput. Using metrics helps to gauge the average batch size needed. Reference-based messaging Reference-based messaging is useful for data replication when you do not know how big a message will be. The external data store must be fast, durable, and highly available for this configuration to work. Data is written to the data store and a reference to the data is returned. The producer sends a message containing the reference to Kafka. The consumer gets the reference from the message and uses it to fetch the data from the data store. Figure 6.4. Reference-based messaging flow As the message passing requires more trips, end-to-end latency will increase. Another significant drawback of this approach is that there is no automatic cleanup of the data in the external system when the Kafka message gets cleaned up. A hybrid approach would be to only send large messages to the data store and process standard-sized messages directly. Inline messaging Inline messaging is complex, but it does not have the overhead of depending on external systems like reference-based messaging. The producing client application has to serialize and then chunk the data if the message is too big. The producer then uses the Kafka ByteArraySerializer or similar to serialize each chunk again before sending it. The consumer tracks messages and buffers chunks until it has a complete message. The consuming client application receives the chunks, which are assembled before deserialization. Complete messages are delivered to the rest of the consuming application in order according to the offset of the first or last chunk for each set of chunked messages. Successful delivery of the complete message is checked against offset metadata to avoid duplicates during a rebalance. Figure 6.5. Inline messaging flow Inline messaging has a performance overhead on the consumer side because of the buffering required, particularly when handling a series of large messages in parallel. The chunks of large messages can become interleaved, so that it is not always possible to commit when all the chunks of a message have been consumed if the chunks of another large message in the buffer are incomplete. For this reason, the buffering is usually supported by persisting message chunks or by implementing commit logic. Configuration to handle larger messages If larger messages cannot be avoided, and to avoid blocks at any point of the message flow, you can increase message limits. To do this, configure message.max.bytes at the topic level to set the maximum record batch size for individual topics. If you set message.max.bytes at the broker level, larger messages are allowed for all topics.
The broker will reject any message that is greater than the limit set with message.max.bytes . The buffer size for the producers ( max.request.size ) and consumers ( message.max.bytes ) must be able to accommodate the larger messages. 6.1.1.10. Controlling the log flush of message data Log flush properties control the periodic writes of cached message data to disk. The scheduler specifies the frequency of checks on the log cache in milliseconds: # ... log.flush.scheduler.interval.ms=2000 # ... You can control the frequency of the flush based on the maximum amount of time that a message is kept in-memory and the maximum number of messages in the log before writing to disk: # ... log.flush.interval.ms=50000 log.flush.interval.messages=100000 # ... The wait between flushes includes the time to make the check and the specified interval before the flush is carried out. Increasing the frequency of flushes can affect throughput. Generally, the recommendation is to not set explicit flush thresholds and let the operating system perform background flush using its default settings. Partition replication provides greater data durability than writes to any single disk as a failed broker can recover from its in-sync replicas. If you are using application flush management, setting lower flush thresholds might be appropriate if you are using faster disks. 6.1.1.11. Partition rebalancing for availability Partitions can be replicated across brokers for fault tolerance. For a given partition, one broker is elected leader and handles all produce requests (writes to the log). Partition followers on other brokers replicate the partition data of the partition leader for data reliability in the event of the leader failing. Followers do not normally serve clients, though broker.rack allows a consumer to consume messages from the closest replica when a Kafka cluster spans multiple datacenters. Followers operate only to replicate messages from the partition leader and allow recovery should the leader fail. Recovery requires an in-sync follower. Followers stay in sync by sending fetch requests to the leader, which returns messages to the follower in order. The follower is considered to be in sync if it has caught up with the most recently committed message on the leader. The leader checks this by looking at the last offset requested by the follower. An out-of-sync follower is usually not eligible as a leader should the current leader fail, unless unclean leader election is allowed . You can adjust the lag time before a follower is considered out of sync: # ... replica.lag.time.max.ms=30000 # ... Lag time puts an upper limit on the time to replicate a message to all in-sync replicas and how long a producer has to wait for an acknowledgment. If a follower fails to make a fetch request and catch up with the latest message within the specified lag time, it is removed from in-sync replicas. You can reduce the lag time to detect failed replicas sooner, but by doing so you might increase the number of followers that fall out of sync needlessly. The right lag time value depends on both network latency and broker disk bandwidth. When a leader partition is no longer available, one of the in-sync replicas is chosen as the new leader. The first broker in a partition's list of replicas is known as the preferred leader. By default, Kafka is enabled for automatic partition leader rebalancing based on a periodic check of leader distribution. That is, Kafka checks to see if the preferred leader is the current leader. 
A rebalance ensures that leaders are evenly distributed across brokers and brokers are not overloaded. You can use Cruise Control for AMQ Streams to figure out replica assignments to brokers that balance load evenly across the cluster. Its calculation takes into account the differing load experienced by leaders and followers. A failed leader affects the balance of a Kafka cluster because the remaining brokers get the extra work of leading additional partitions. For the assignment found by Cruise Control to actually be balanced it is necessary that partitions are lead by the preferred leader. Kafka can automatically ensure that the preferred leader is being used (where possible), changing the current leader if necessary. This ensures that the cluster remains in the balanced state found by Cruise Control. You can control the frequency, in seconds, of the rebalance check and the maximum percentage of imbalance allowed for a broker before a rebalance is triggered. #... auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #... The percentage leader imbalance for a broker is the ratio between the current number of partitions for which the broker is the current leader and the number of partitions for which it is the preferred leader. You can set the percentage to zero to ensure that preferred leaders are always elected, assuming they are in sync. If the checks for rebalances need more control, you can disable automated rebalances. You can then choose when to trigger a rebalance using the kafka-leader-election.sh command line tool. Note The Grafana dashboards provided with AMQ Streams show metrics for under-replicated partitions and partitions that do not have an active leader. 6.1.1.12. Unclean leader election Leader election to an in-sync replica is considered clean because it guarantees no loss of data. And this is what happens by default. But what if there is no in-sync replica to take on leadership? Perhaps the ISR (in-sync replica) only contained the leader when the leader's disk died. If a minimum number of in-sync replicas is not set, and there are no followers in sync with the partition leader when its hard drive fails irrevocably, data is already lost. Not only that, but a new leader cannot be elected because there are no in-sync followers. You can configure how Kafka handles leader failure: # ... unclean.leader.election.enable=false # ... Unclean leader election is disabled by default, which means that out-of-sync replicas cannot become leaders. With clean leader election, if no other broker was in the ISR when the old leader was lost, Kafka waits until that leader is back online before messages can be written or read. Unclean leader election means out-of-sync replicas can become leaders, but you risk losing messages. The choice you make depends on whether your requirements favor availability or durability. You can override the default configuration for specific topics at the topic level. If you cannot afford the risk of data loss, then leave the default configuration. 6.1.1.13. Avoiding unnecessary consumer group rebalances For consumers joining a new consumer group, you can add a delay so that unnecessary rebalances to the broker are avoided: # ... group.initial.rebalance.delay.ms=3000 # ... The delay is the amount of time that the coordinator waits for members to join. The longer the delay, the more likely it is that all the members will join in time and avoid a rebalance. 
But the delay also prevents the group from consuming until the period has ended. 6.1.2. Kafka producer configuration tuning Use a basic producer configuration with optional properties that are tailored to specific use cases. Adjusting your configuration to maximize throughput might increase latency or vice versa. You will need to experiment and tune your producer configuration to get the balance you need. 6.1.2.1. Basic producer configuration Connection and serializer properties are required for every producer. Generally, it is good practice to add a client id for tracking, and use compression on the producer to reduce batch sizes in requests. In a basic producer configuration: The order of messages in a partition is not guaranteed. The acknowledgment of messages reaching the broker does not guarantee durability. Basic producer configuration properties # ... bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5 # ... 1 (Required) Tells the producer to connect to a Kafka cluster using a host:port bootstrap server address for a Kafka broker. The producer uses the address to discover and connect to all brokers in the cluster. Use a comma-separated list to specify two or three addresses in case a server is down, but it's not necessary to provide a list of all the brokers in the cluster. 2 (Required) Serializer to transform the key of each message to bytes prior to them being sent to a broker. 3 (Required) Serializer to transform the value of each message to bytes prior to them being sent to a broker. 4 (Optional) The logical name for the client, which is used in logs and metrics to identify the source of a request. 5 (Optional) The codec for compressing messages, which are sent and might be stored in compressed format and then decompressed when reaching a consumer. Compression is useful for improving throughput and reducing the load on storage, but might not be suitable for low latency applications where the cost of compression or decompression could be prohibitive. 6.1.2.2. Data durability You can apply greater data durability, to minimize the likelihood that messages are lost, using message delivery acknowledgments. # ... acks=all 1 # ... 1 Specifying acks=all forces a partition leader to replicate messages to a certain number of followers before acknowledging that the message request was successfully received. Because of the additional checks, acks=all increases the latency between the producer sending a message and receiving acknowledgment. The number of brokers which need to have appended the messages to their logs before the acknowledgment is sent to the producer is determined by the topic's min.insync.replicas configuration. A typical starting point is to have a topic replication factor of 3, with two in-sync replicas on other brokers. In this configuration, the producer can continue unaffected if a single broker is unavailable. If a second broker becomes unavailable, the producer won't receive acknowledgments and won't be able to produce more messages. Topic configuration to support acks=all # ... min.insync.replicas=2 1 # ... 1 Use 2 in-sync replicas. The default is 1 . Note If the system fails, there is a risk of unsent data in the buffer being lost. 6.1.2.3. Ordered delivery Idempotent producers avoid duplicates as messages are delivered exactly once. 
IDs and sequence numbers are assigned to messages to ensure the order of delivery, even in the event of failure. If you are using acks=all for data consistency, enabling idempotency makes sense for ordered delivery. Ordered delivery with idempotency # ... enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4 # ... 1 Set to true to enable the idempotent producer. 2 With idempotent delivery the number of in-flight requests may be greater than 1 while still providing the message ordering guarantee. The default is 5 in-flight requests. 3 Set acks to all . 4 Set the number of attempts to resend a failed message request. If you are not using acks=all and idempotency because of the performance cost, set the number of in-flight (unacknowledged) requests to 1 to preserve ordering. Otherwise, a situation is possible where Message-A fails only to succeed after Message-B was already written to the broker. Ordered delivery without idempotency # ... enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647 # ... 1 Set to false to disable the idempotent producer. 2 Set the number of in-flight requests to exactly 1 . 6.1.2.4. Reliability guarantees Idempotence is useful for exactly once writes to a single partition. Transactions, when used with idempotence, allow exactly once writes across multiple partitions. Transactions guarantee that messages using the same transactional ID are produced once, and either all are successfully written to the respective logs or none of them are. # ... enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2 # ... 1 Specify a unique transactional ID. 2 Set the maximum allowed time for transactions in milliseconds before a timeout error is returned. The default is 900000 or 15 minutes. The choice of transactional.id is important in order that the transactional guarantee is maintained. Each transactional id should be used for a unique set of topic partitions. For example, this can be achieved using an external mapping of topic partition names to transactional ids, or by computing the transactional id from the topic partition names using a function that avoids collisions. 6.1.2.5. Optimizing throughput and latency Usually, the requirement of a system is to satisfy a particular throughput target for a proportion of messages within a given latency. For example, targeting 500,000 messages per second with 95% of messages being acknowledged within 2 seconds. It's likely that the messaging semantics (message ordering and durability) of your producer are defined by the requirements for your application. For instance, it's possible that you don't have the option of using acks=0 or acks=1 without breaking some important property or guarantee provided by your application. Broker restarts have a significant impact on high percentile statistics. For example, over a long period the 99th percentile latency is dominated by behavior around broker restarts. This is worth considering when designing benchmarks or comparing performance numbers from benchmarking with performance numbers seen in production. Depending on your objective, Kafka offers a number of configuration parameters and techniques for tuning producer performance for throughput and latency. 
Message batching ( linger.ms and batch.size ) Message batching delays sending messages in the hope that more messages destined for the same broker will be sent, allowing them to be batched into a single produce request. Batching is a compromise between higher latency in return for higher throughput. Time-based batching is configured using linger.ms , and size-based batching is configured using batch.size . Compression ( compression.type ) Message compression adds latency in the producer (CPU time spent compressing the messages), but makes requests (and potentially disk writes) smaller, which can increase throughput. Whether compression is worthwhile, and the best compression to use, will depend on the messages being sent. Compression happens on the thread which calls KafkaProducer.send() , so if the latency of this method matters for your application you should consider using more threads. Pipelining ( max.in.flight.requests.per.connection ) Pipelining means sending more requests before the response to a request has been received. In general more pipelining means better throughput, up to a threshold at which other effects, such as worse batching, start to counteract the effect on throughput. Lowering latency When your application calls KafkaProducer.send() the messages are: Processed by any interceptors Serialized Assigned to a partition Compressed Added to a batch of messages in a per-partition queue At which point the send() method returns. So the time send() is blocked is determined by: The time spent in the interceptors, serializers and partitioner The compression algorithm used The time spent waiting for a buffer to use for compression Batches will remain in the queue until one of the following occurs: The batch is full (according to batch.size ) The delay introduced by linger.ms has passed The sender is about to send message batches for other partitions to the same broker, and it is possible to add this batch too The producer is being flushed or closed Look at the configuration for batching and buffering to mitigate the impact of send() blocking on latency. # ... linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3 # ... 1 The linger property adds a delay in milliseconds so that larger batches of messages are accumulated and sent in a request. The default is 0'. 2 If a maximum batch.size in bytes is used, a request is sent when the maximum is reached, or messages have been queued for longer than linger.ms (whichever comes sooner). Adding the delay allows batches to accumulate messages up to the batch size. 3 The buffer size must be at least as big as the batch size, and be able to accommodate buffering, compression and in-flight requests. Increasing throughput Improve throughput of your message requests by adjusting the maximum time to wait before a message is delivered and completes a send request. You can also direct messages to a specified partition by writing a custom partitioner to replace the default. # ... delivery.timeout.ms=120000 1 partitioner.class=my-custom-partitioner 2 # ... 1 The maximum time in milliseconds to wait for a complete send request. You can set the value to MAX_LONG to delegate to Kafka an indefinite number of retries. The default is 120000 or 2 minutes. 2 Specify the class name of the custom partitioner. 6.1.3. Kafka consumer configuration tuning Use a basic consumer configuration with optional properties that are tailored to specific use cases. 
When tuning your consumers your primary concern will be ensuring that they cope efficiently with the amount of data ingested. As with the producer tuning, be prepared to make incremental changes until the consumers operate as expected. 6.1.3.1. Basic consumer configuration Connection and deserializer properties are required for every consumer. Generally, it is good practice to add a client id for tracking. In a consumer configuration, irrespective of any subsequent configuration: The consumer fetches from a given offset and consumes the messages in order, unless the offset is changed to skip or re-read messages. The broker does not know if the consumer processed the responses, even when committing offsets to Kafka, because the offsets might be sent to a different broker in the cluster. Basic consumer configuration properties # ... bootstrap.servers=localhost:9092 1 key.deserializer=org.apache.kafka.common.serialization.StringDeserializer 2 value.deserializer=org.apache.kafka.common.serialization.StringDeserializer 3 client.id=my-client 4 group.id=my-group-id 5 # ... 1 (Required) Tells the consumer to connect to a Kafka cluster using a host:port bootstrap server address for a Kafka broker. The consumer uses the address to discover and connect to all brokers in the cluster. Use a comma-separated list to specify two or three addresses in case a server is down, but it is not necessary to provide a list of all the brokers in the cluster. If you are using a loadbalancer service to expose the Kafka cluster, you only need the address for the service because the availability is handled by the loadbalancer. 2 (Required) Deserializer to transform the bytes fetched from the Kafka broker into message keys. 3 (Required) Deserializer to transform the bytes fetched from the Kafka broker into message values. 4 (Optional) The logical name for the client, which is used in logs and metrics to identify the source of a request. The id can also be used to throttle consumers based on processing time quotas. 5 (Conditional) A group id is required for a consumer to be able to join a consumer group. 6.1.3.2. Scaling data consumption using consumer groups Consumer groups share a typically large data stream generated by one or multiple producers from a given topic. Consumers are grouped using a group.id property, allowing messages to be spread across the members. One of the consumers in the group is elected leader and decides how the partitions are assigned to the consumers in the group. Each partition can only be assigned to a single consumer. If you do not already have as many consumers as partitions, you can scale data consumption by adding more consumer instances with the same group.id . Adding more consumers to a group than there are partitions will not help throughput, but it does mean that there are consumers on standby should one stop functioning. If you can meet throughput goals with fewer consumers, you save on resources. Consumers within the same consumer group send offset commits and heartbeats to the same broker. So the greater the number of consumers in the group, the higher the request load on the broker. # ... group.id=my-group-id 1 # ... 1 Add a consumer to a consumer group using a group id. 6.1.3.3. Message ordering guarantees Kafka brokers receive fetch requests from consumers that ask the broker to send messages from a list of topics, partitions and offset positions. 
A consumer observes messages in a single partition in the same order that they were committed to the broker, which means that Kafka only provides ordering guarantees for messages in a single partition. Conversely, if a consumer is consuming messages from multiple partitions, the order of messages in different partitions as observed by the consumer does not necessarily reflect the order in which they were sent. If you want a strict ordering of messages from one topic, use one partition per consumer. 6.1.3.4. Optimizing throughput and latency Control the number of messages returned when your client application calls KafkaConsumer.poll() . Use the fetch.max.wait.ms and fetch.min.bytes properties to increase the minimum amount of data fetched by the consumer from the Kafka broker. Time-based batching is configured using fetch.max.wait.ms , and size-based batching is configured using fetch.min.bytes . If CPU utilization in the consumer or broker is high, it might be because there are too many requests from the consumer. You can adjust fetch.max.wait.ms and fetch.min.bytes properties higher so that there are fewer requests and messages are delivered in bigger batches. By adjusting higher, throughput is improved with some cost to latency. You can also adjust higher if the amount of data being produced is low. For example, if you set fetch.max.wait.ms to 500ms and fetch.min.bytes to 16384 bytes, when Kafka receives a fetch request from the consumer it will respond when the first of either threshold is reached. Conversely, you can adjust the fetch.max.wait.ms and fetch.min.bytes properties lower to improve end-to-end latency. # ... fetch.max.wait.ms=500 1 fetch.min.bytes=16384 2 # ... 1 The maximum time in milliseconds the broker will wait before completing fetch requests. The default is 500 milliseconds. 2 If a minimum batch size in bytes is used, a request is sent when the minimum is reached, or messages have been queued for longer than fetch.max.wait.ms (whichever comes sooner). Adding the delay allows batches to accumulate messages up to the batch size. Lowering latency by increasing the fetch request size Use the fetch.max.bytes and max.partition.fetch.bytes properties to increase the maximum amount of data fetched by the consumer from the Kafka broker. The fetch.max.bytes property sets a maximum limit in bytes on the amount of data fetched from the broker at one time. The max.partition.fetch.bytes sets a maximum limit in bytes on how much data is returned for each partition, which must always be larger than the number of bytes set in the broker or topic configuration for max.message.bytes . The maximum amount of memory a client can consume is calculated approximately as: NUMBER-OF-BROKERS * fetch.max.bytes and NUMBER-OF-PARTITIONS * max.partition.fetch.bytes If memory usage can accommodate it, you can increase the values of these two properties. By allowing more data in each request, latency is improved as there are fewer fetch requests. # ... fetch.max.bytes=52428800 1 max.partition.fetch.bytes=1048576 2 # ... 1 The maximum amount of data in bytes returned for a fetch request. 2 The maximum amount of data in bytes returned for each partition. 6.1.3.5. Avoiding data loss or duplication when committing offsets The Kafka auto-commit mechanism allows a consumer to commit the offsets of messages automatically. If enabled, the consumer will commit offsets received from polling the broker at 5000ms intervals. 
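For orientation, a consumer that relies on auto-commit is simply a poll loop with no explicit commit calls. The following sketch uses the Apache Kafka Java client; the bootstrap address, group id, and topic name are placeholders, and the final comment marks where an explicit commit would go if auto-commit were disabled, as discussed next.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AutoCommitConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group-id");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Offsets are committed automatically at the default 5000 ms interval
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            // Loop runs until the process is stopped
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // Process each record; with auto-commit, offsets for a batch may be
                    // committed before this processing has completed
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                // With enable.auto.commit=false you would call consumer.commitSync() here,
                // after all records in the batch have been processed (see the discussion below)
            }
        }
    }
}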
The auto-commit mechanism is convenient, but it introduces a risk of data loss and duplication. If a consumer has fetched and transformed a number of messages, but the system crashes with processed messages in the consumer buffer when performing an auto-commit, that data is lost. If the system crashes after processing the messages, but before performing the auto-commit, the data is duplicated on another consumer instance after rebalancing. Auto-committing can avoid data loss only when all messages are processed before the poll to the broker, or the consumer closes. To minimize the likelihood of data loss or duplication, you can set enable.auto.commit to false and develop your client application to have more control over committing offsets. Or you can use auto.commit.interval.ms to decrease the intervals between commits. # ... enable.auto.commit=false 1 # ... 1 Auto commit is set to false to provide more control over committing offsets. By setting to enable.auto.commit to false , you can commit offsets after all processing has been performed and the message has been consumed. For example, you can set up your application to call the Kafka commitSync and commitAsync commit APIs. The commitSync API commits the offsets in a message batch returned from polling. You call the API when you are finished processing all the messages in the batch. If you use the commitSync API, the application will not poll for new messages until the last offset in the batch is committed. If this negatively affects throughput, you can commit less frequently, or you can use the commitAsync API. The commitAsync API does not wait for the broker to respond to a commit request, but risks creating more duplicates when rebalancing. A common approach is to combine both commit APIs in an application, with the commitSync API used just before shutting the consumer down or rebalancing to make sure the final commit is successful. 6.1.3.5.1. Controlling transactional messages Consider using transactional ids and enabling idempotence ( enable.idempotence=true ) on the producer side to guarantee exactly-once delivery. On the consumer side, you can then use the isolation.level property to control how transactional messages are read by the consumer. The isolation.level property has two valid values: read_committed read_uncommitted (default) Use read_committed to ensure that only transactional messages that have been committed are read by the consumer. However, this will cause an increase in end-to-end latency, because the consumer will not be able to return a message until the brokers have written the transaction markers that record the result of the transaction ( committed or aborted ). # ... enable.auto.commit=false isolation.level=read_committed 1 # ... 1 Set to read_committed so that only committed messages are read by the consumer. 6.1.3.6. Recovering from failure to avoid data loss Use the session.timeout.ms and heartbeat.interval.ms properties to configure the time taken to check and recover from consumer failure within a consumer group. The session.timeout.ms property specifies the maximum amount of time in milliseconds a consumer within a consumer group can be out of contact with a broker before being considered inactive and a rebalancing is triggered between the active consumers in the group. When the group rebalances, the partitions are reassigned to the members of the group. 
The heartbeat.interval.ms property specifies the interval in milliseconds between heartbeat checks to the consumer group coordinator to indicate that the consumer is active and connected. The heartbeat interval must be lower, usually by a third, than the session timeout interval. If you set the session.timeout.ms property lower, failing consumers are detected earlier, and rebalancing can take place quicker. However, take care not to set the timeout so low that the broker fails to receive a heartbeat in time and triggers an unnecessary rebalance. Decreasing the heartbeat interval reduces the chance of accidental rebalancing, but more frequent heartbeats increases the overhead on broker resources. 6.1.3.7. Managing offset policy Use the auto.offset.reset property to control how a consumer behaves when no offsets have been committed, or a committed offset is no longer valid or deleted. Suppose you deploy a consumer application for the first time, and it reads messages from an existing topic. Because this is the first time the group.id is used, the __consumer_offsets topic does not contain any offset information for this application. The new application can start processing all existing messages from the start of the log or only new messages. The default reset value is latest , which starts at the end of the partition, and consequently means some messages are missed. To avoid data loss, but increase the amount of processing, set auto.offset.reset to earliest to start at the beginning of the partition. Also consider using the earliest option to avoid messages being lost when the offsets retention period ( offsets.retention.minutes ) configured for a broker has ended. If a consumer group or standalone consumer is inactive and commits no offsets during the retention period, previously committed offsets are deleted from __consumer_offsets . # ... heartbeat.interval.ms=3000 1 session.timeout.ms=10000 2 auto.offset.reset=earliest 3 # ... 1 Adjust the heartbeat interval lower according to anticipated rebalances. 2 If no heartbeats are received by the Kafka broker before the timeout duration expires, the consumer is removed from the consumer group and a rebalance is initiated. If the broker configuration has a group.min.session.timeout.ms and group.max.session.timeout.ms , the session timeout value must be within that range. 3 Set to earliest to return to the start of a partition and avoid data loss if offsets were not committed. If the amount of data returned in a single fetch request is large, a timeout might occur before the consumer has processed it. In this case, you can lower max.partition.fetch.bytes or increase session.timeout.ms . 6.1.3.8. Minimizing the impact of rebalances The rebalancing of a partition between active consumers in a group is the time it takes for: Consumers to commit their offsets The new consumer group to be formed The group leader to assign partitions to group members The consumers in the group to receive their assignments and start fetching Clearly, the process increases the downtime of a service, particularly when it happens repeatedly during a rolling restart of a consumer group cluster. In this situation, you can use the concept of static membership to reduce the number of rebalances. Rebalancing assigns topic partitions evenly among consumer group members. Static membership uses persistence so that a consumer instance is recognized during a restart after a session timeout. 
The consumer group coordinator can identify a new consumer instance using a unique id that is specified using the group.instance.id property. During a restart, the consumer is assigned a new member id, but as a static member it continues with the same instance id, and the same assignment of topic partitions is made. If the consumer application does not make a call to poll at least every max.poll.interval.ms milliseconds, the consumer is considered to be failed, causing a rebalance. If the application cannot process all the records returned from poll in time, you can avoid a rebalance by using the max.poll.interval.ms property to specify the interval in milliseconds between polls for new messages from a consumer. Or you can use the max.poll.records property to set a maximum limit on the number of records returned from the consumer buffer, allowing your application to process fewer records within the max.poll.interval.ms limit. # ... group.instance.id= UNIQUE-ID 1 max.poll.interval.ms=300000 2 max.poll.records=500 3 # ... 1 The unique instance id ensures that a new consumer instance receives the same assignment of topic partitions. 2 Set the interval to check the consumer is continuing to process messages. 3 Sets the number of processed records returned from the consumer. 6.2. Setting limits on brokers using the Kafka Static Quota plugin Important The Kafka Static Quota plugin is a Technology Preview only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You enable the plugin and set limits by adding properties to the Kafka configuration file. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers. You can set byte-rate thresholds for producer and consumer bandwidth. The total limit is distributed across all clients accessing the broker. For example, you can set a byte-rate threshold of 40 MBps for producers. If two producers are running, they are each limited to a throughput of 20 MBps. Storage quotas throttle Kafka disk storage limits between a soft limit and hard limit. The limits apply to all available disk space. Producers are slowed gradually between the soft and hard limit. The limits prevent disks filling up too quickly and exceeding their capacity. Full disks can lead to issues that are hard to rectify. The hard limit is the maximum storage limit. Note For JBOD storage, the limit applies across all disks. If a broker is using two 1 TB disks and the quota is 1.1 TB, one disk might fill and the other disk will be almost empty. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. A ZooKeeper cluster is configured and running . Procedure Edit the /opt/kafka/config/server.properties Kafka configuration file. The plugin properties are shown in this example configuration. Example Kafka Static Quota plugin configuration # ... 
client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce=1000000 2 client.quota.callback.static.fetch=1000000 3 client.quota.callback.static.storage.soft=400000000000 4 client.quota.callback.static.storage.hard=500000000000 5 client.quota.callback.static.storage.check-interval=5 6 # ... 1 Loads the Kafka Static Quota plugin. 2 Sets the producer byte-rate threshold. 1 MBps in this example. 3 Sets the consumer byte-rate threshold. 1 MBps in this example. 4 Sets the lower soft limit for storage. 400 GB in this example. 5 Sets the higher hard limit for storage. 500 GB in this example. 6 Sets the interval in seconds between checks on storage. 5 seconds in this example. You can set this to 0 to disable the check. Start the Kafka broker with the default configuration file. su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties Verify that the Kafka broker is running. jcmd | grep Kafka Additional resources Kafka broker configuration tuning 6.3. Scaling Clusters 6.3.1. Scaling Kafka clusters 6.3.1.1. Adding brokers to a cluster The primary way of increasing throughput for a topic is to increase the number of partitions for that topic. That works because the partitions allow the load for that topic to be shared between the brokers in the cluster. When the brokers are all constrained by some resource (typically I/O), then using more partitions will not yield an increase in throughput. Instead, you must add brokers to the cluster. When you add an extra broker to the cluster, AMQ Streams does not assign any partitions to it automatically. You have to decide which partitions to move from the existing brokers to the new broker. Once the partitions have been redistributed between all brokers, each broker should have a lower resource utilization. 6.3.1.2. Removing brokers from the cluster Before you remove a broker from a cluster, you must ensure that it is not assigned to any partitions. You should decide which remaining brokers will be responsible for each of the partitions on the broker being decommissioned. Once the broker has no assigned partitions, you can stop it. 6.3.2. Reassignment of partitions The kafka-reassign-partitions.sh utility is used to reassign partitions to different brokers. It has three different modes: --generate Takes a set of topics and brokers and generates a reassignment JSON file which will result in the partitions of those topics being assigned to those brokers. It is an easy way to generate a reassignment JSON file , but it operates on whole topics, so its use is not always appropriate. --execute Takes a reassignment JSON file and applies it to the partitions and brokers in the cluster. Brokers which are gaining partitions will become followers of the partition leader. For a given partition, once the new broker has caught up and joined the ISR the old broker will stop being a follower and will delete its replica. --verify Using the same reassignment JSON file as the --execute step, --verify checks whether all of the partitions in the file have been moved to their intended brokers. If the reassignment is complete it will also remove any throttles which are in effect. Unless removed, throttles will continue to affect the cluster even after the reassignment has finished. It is only possible to have one reassignment running in the cluster at any given time, and it is not possible to cancel a running reassignment. 
If you need to cancel a reassignment you have to wait for it to complete and then perform another reassignment to revert the effects of the first one. The kafka-reassign-partitions.sh will print the reassignment JSON for this reversion as part of its output. Very large reassignments should be broken down into a number of smaller reassignments in case there is a need to stop in-progress reassignment. 6.3.2.1. Reassignment JSON file The reassignment JSON file has a specific structure: Where <PartitionObjects> is a comma-separated list of objects like: The "log_dirs" property is optional and is used to move the partition to a specific log directory. The following is an example reassignment JSON file that assigns topic topic-a , partition 4 to brokers 2 , 4 and 7 , and topic topic-b partition 2 to brokers 1 , 5 and 7 : { "version": 1, "partitions": [ { "topic": "topic-a", "partition": 4, "replicas": [2,4,7] }, { "topic": "topic-b", "partition": 2, "replicas": [1,5,7] } ] } Partitions not included in the JSON are not changed. 6.3.2.2. Generating reassignment JSON files The easiest way to assign all the partitions for a given set of topics to a given set of brokers is to generate a reassignment JSON file using the kafka-reassign-partitions.sh --generate , command. bin/kafka-reassign-partitions.sh --zookeeper <ZooKeeper> --topics-to-move-json-file <TopicsFile> --broker-list <BrokerList> --generate The <TopicsFile> is a JSON file which lists the topics to move. It has the following structure: where <TopicObjects> is a comma-separated list of objects like: For example to move all the partitions of topic-a and topic-b to brokers 4 and 7 bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file topics-to-be-moved.json --broker-list 4,7 --generate where topics-to-be-moved.json has contents: { "version": 1, "topics": [ { "topic": "topic-a"}, { "topic": "topic-b"} ] } 6.3.2.3. Creating reassignment JSON files manually You can manually create the reassignment JSON file if you want to move specific partitions. 6.3.3. Reassignment throttles Reassigning partitions can be a slow process because it can require moving lots of data between brokers. To avoid this having a detrimental impact on clients it is possible to throttle the reassignment. Using a throttle can mean the reassignment takes longer. If the throttle is too low then the newly assigned brokers will not be able to keep up with records being published and the reassignment will never complete. If the throttle is too high then clients will be impacted. For example, for producers, this could manifest as higher than normal latency waiting for acknowledgement. For consumers, this could manifest as a drop in throughput caused by higher latency between polls. 6.3.4. Scaling up a Kafka cluster This procedure describes how to increase the number of brokers in a Kafka cluster. Prerequisites An existing Kafka cluster. A new machine with the AMQ broker installed . A reassignment JSON file of how partitions should be reassigned to brokers in the enlarged cluster. Procedure Create a configuration file for the new broker using the same settings as for the other brokers in your cluster, except for broker.id which should be a number that is not already used by any of the other brokers. 
Start the new Kafka broker passing the configuration file you created in the previous step as the argument to the kafka-server-start.sh script: su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties Verify that the Kafka broker is running. jcmd | grep Kafka Repeat the above steps for each new broker. Execute the partition reassignment using the kafka-reassign-partitions.sh command line tool. kafka-reassign-partitions.sh --zookeeper <ZooKeeperHostAndPort> --reassignment-json-file <ReassignmentJsonFile> --execute If you are going to throttle replication you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example: kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --throttle 5000000 --execute This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a file in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file. If you need to change the throttle during reassignment you can use the same command line with a different throttled rate. For example: kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --throttle 10000000 --execute Periodically verify whether the reassignment has completed using the kafka-reassign-partitions.sh command line tool. This is the same command as the previous step but with the --verify option instead of the --execute option. kafka-reassign-partitions.sh --zookeeper <ZooKeeperHostAndPort> --reassignment-json-file <ReassignmentJsonFile> --verify For example: kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --verify The reassignment has finished when the --verify command reports each of the partitions being moved as completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignment to their original brokers. 6.3.5. Scaling down a Kafka cluster This procedure describes how to decrease the number of brokers in a Kafka cluster. Prerequisites An existing Kafka cluster. A reassignment JSON file of how partitions should be reassigned to brokers in the cluster once the broker(s) have been removed. Procedure Execute the partition reassignment using the kafka-reassign-partitions.sh command line tool. kafka-reassign-partitions.sh --zookeeper <ZooKeeperHostAndPort> --reassignment-json-file <ReassignmentJsonFile> --execute If you are going to throttle replication you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example: kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --throttle 5000000 --execute This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a file in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file. If you need to change the throttle during reassignment you can use the same command line with a different throttled rate.
For example: kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --throttle 10000000 --execute Periodically verify whether the reassignment has completed using the kafka-reassign-partitions.sh command line tool. This is the same command as the previous step but with the --verify option instead of the --execute option. kafka-reassign-partitions.sh --zookeeper <ZooKeeperHostAndPort> --reassignment-json-file <ReassignmentJsonFile> --verify For example: kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --verify The reassignment has finished when the --verify command reports each of the partitions being moved as completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignment to their original brokers. Once all the partition reassignments have finished, the broker being removed should not have responsibility for any of the partitions in the cluster. You can verify this by checking each of the directories given in the broker's log.dirs configuration parameter. If any of the log directories on the broker contains a directory that does not match the extended regular expression \.[a-z0-9]+-delete$ then the broker still has live partitions and it should not be stopped. You can check this by executing the command: ls -l <LogDir> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\.[a-z0-9]+-delete$' If the above command prints any output then the broker still has live partitions. In this case, either the reassignment has not finished, or the reassignment JSON file was incorrect. Once you have confirmed that the broker has no live partitions you can stop it. su - kafka /opt/kafka/bin/kafka-server-stop.sh Confirm that the Kafka broker is stopped. jcmd | grep kafka 6.3.6. Scaling up a ZooKeeper cluster This procedure describes how to add servers (nodes) to a ZooKeeper cluster. The dynamic reconfiguration feature of ZooKeeper maintains a stable ZooKeeper cluster during the scale up process. Prerequisites Dynamic reconfiguration is enabled in the ZooKeeper configuration file ( reconfigEnabled=true ). ZooKeeper authentication is enabled and you can access the new server using the authentication mechanism. Procedure Perform the following steps for each ZooKeeper server, one at a time: Add a server to the ZooKeeper cluster as described in Section 3.3, "Running multi-node ZooKeeper cluster" and then start ZooKeeper. Note the IP address and configured access ports of the new server. Start a zookeeper-shell session for the server. Run the following command from a machine that has access to the cluster (this might be one of the ZooKeeper nodes or your local machine, if it has access). su - kafka /opt/kafka/bin/zookeeper-shell.sh <ip-address>:<zk-port> In the shell session, with the ZooKeeper node running, enter the following line to add the new server to the quorum as a voting member: reconfig -add server.<positive-id> = <address1>:<port1>:<port2>[:role];[<client-port-address>:]<client-port> For example: reconfig -add server.4=172.17.0.4:2888:3888:participant;172.17.0.4:2181 Where <positive-id> is the new server ID 4 . For the two ports, <port1> 2888 is for communication between ZooKeeper servers, and <port2> 3888 is for leader election. The new configuration propagates to the other servers in the ZooKeeper cluster; the new server is now a full member of the quorum.
Repeat steps 1-4 for the other servers that you want to add. Additional resources Section 6.3.7, "Scaling down a ZooKeeper cluster" 6.3.7. Scaling down a ZooKeeper cluster This procedure describes how to remove servers (nodes) from a ZooKeeper cluster. The dynamic reconfiguration feature of ZooKeeper maintains a stable ZooKeeper cluster during the scale down process. Prerequisites Dynamic reconfiguration is enabled in the ZooKeeper configuration file ( reconfigEnabled=true ). ZooKeeper authentication is enabled and you can access the new server using the authentication mechanism. Procedure Perform the following steps for each ZooKeeper server, one at a time: Log in to the zookeeper-shell on one of the servers that will be retained after the scale down (for example, server 1). Note Access the server using the authentication mechanism configured for the ZooKeeper cluster. Remove a server, for example server 5. Deactivate the server that you removed. Repeat steps 1-3 to reduce the cluster size. Additional resources Section 6.3.6, "Scaling up a ZooKeeper cluster" Removing servers in the ZooKeeper documentation
|
[
"num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000",
"num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576",
"auto.create.topics.enable=false delete.topic.enable=true",
"transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2",
"offsets.topic.num.partitions=50 offsets.topic.replication.factor=3",
"num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=1 4",
"replica.socket.receive.buffer.bytes=65536",
"socket.request.max.bytes=104857600",
"socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576",
"log.segment.bytes=1073741824 log.roll.ms=604800000",
"log.retention.ms=1680000",
"log.retention.bytes=1073741824",
"log.segment.delete.delay.ms=60000",
"log.cleaner.enable=true",
"log.cleanup.policy=compact,delete",
"log.retention.check.interval.ms=300000",
"log.cleaner.backoff.ms=15000",
"log.cleaner.delete.retention.ms=86400000",
"log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9",
"log.cleaner.threads=8",
"log.cleaner.io.max.bytes.per.second=1.7976931348623157E308",
"log.flush.scheduler.interval.ms=2000",
"log.flush.interval.ms=50000 log.flush.interval.messages=100000",
"replica.lag.time.max.ms=30000",
"# auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #",
"unclean.leader.election.enable=false",
"group.initial.rebalance.delay.ms=3000",
"bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5",
"acks=all 1",
"min.insync.replicas=2 1",
"enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4",
"enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647",
"enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2",
"linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3",
"delivery.timeout.ms=120000 1 partitioner.class=my-custom-partitioner 2",
"bootstrap.servers=localhost:9092 1 key.deserializer=org.apache.kafka.common.serialization.StringDeserializer 2 value.deserializer=org.apache.kafka.common.serialization.StringDeserializer 3 client.id=my-client 4 group.id=my-group-id 5",
"group.id=my-group-id 1",
"fetch.max.wait.ms=500 1 fetch.min.bytes=16384 2",
"NUMBER-OF-BROKERS * fetch.max.bytes and NUMBER-OF-PARTITIONS * max.partition.fetch.bytes",
"fetch.max.bytes=52428800 1 max.partition.fetch.bytes=1048576 2",
"enable.auto.commit=false 1",
"enable.auto.commit=false isolation.level=read_committed 1",
"heartbeat.interval.ms=3000 1 session.timeout.ms=10000 2 auto.offset.reset=earliest 3",
"group.instance.id= UNIQUE-ID 1 max.poll.interval.ms=300000 2 max.poll.records=500 3",
"client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce=1000000 2 client.quota.callback.static.fetch=1000000 3 client.quota.callback.static.storage.soft=400000000000 4 client.quota.callback.static.storage.hard=500000000000 5 client.quota.callback.static.storage.check-interval=5 6",
"su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"jcmd | grep Kafka",
"{ \"version\": 1, \"partitions\": [ <PartitionObjects> ] }",
"{ \"topic\": <TopicName> , \"partition\": <Partition> , \"replicas\": [ <AssignedBrokerIds> ], \"log_dirs\": [ <LogDirs> ] }",
"{ \"version\": 1, \"partitions\": [ { \"topic\": \"topic-a\", \"partition\": 4, \"replicas\": [2,4,7] }, { \"topic\": \"topic-b\", \"partition\": 2, \"replicas\": [1,5,7] } ] }",
"bin/kafka-reassign-partitions.sh --zookeeper <ZooKeeper> --topics-to-move-json-file <TopicsFile> --broker-list <BrokerList> --generate",
"{ \"version\": 1, \"topics\": [ <TopicObjects> ] }",
"{ \"topic\": <TopicName> }",
"bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file topics-to-be-moved.json --broker-list 4,7 --generate",
"{ \"version\": 1, \"topics\": [ { \"topic\": \"topic-a\"}, { \"topic\": \"topic-b\"} ] }",
"su - kafka /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"jcmd | grep Kafka",
"kafka-reassign-partitions.sh --zookeeper <ZooKeeperHostAndPort> --reassignment-json-file <ReassignmentJsonFile> --execute",
"kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --throttle 5000000 --execute",
"kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --throttle 10000000 --execute",
"kafka-reassign-partitions.sh --zookeeper <ZooKeeperHostAndPort> --reassignment-json-file <ReassignmentJsonFile> --verify",
"kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --verify",
"kafka-reassign-partitions.sh --zookeeper <ZooKeeperHostAndPort> --reassignment-json-file <ReassignmentJsonFile> --execute",
"kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --throttle 5000000 --execute",
"kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --throttle 10000000 --execute",
"kafka-reassign-partitions.sh --zookeeper <ZooKeeperHostAndPort> --reassignment-json-file <ReassignmentJsonFile> --verify",
"kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 --reassignment-json-file reassignment.json --verify",
"ls -l <LogDir> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\\.[a-z0-9]+-deleteUSD'",
"su - kafka /opt/kafka/bin/kafka-server-stop.sh",
"jcmd | grep kafka",
"su - kafka /opt/kafka/bin/zookeeper-shell.sh <ip-address>:<zk-port>",
"reconfig -add server.<positive-id> = <address1>:<port1>:<port2>[:role];[<client-port-address>:]<client-port>",
"reconfig -add server.4=172.17.0.4:2888:3888:participant;172.17.0.4:2181",
"reconfig -remove 5"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/assembly-managing-kafka-str
|
C.5.2. Add a New Passphrase to an Existing Device
|
C.5.2. Add a New Passphrase to an Existing Device After being prompted for any one of the existing passphrases for authentication, you will be prompted to enter the new passphrase.
|
[
"cryptsetup luksAddKey <device>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/apcs05s02
|
Distributed tracing
|
Distributed tracing OpenShift Container Platform 4.9 Distributed tracing installation, usage, and release notes Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/distributed_tracing/index
|
Chapter 10. Managing errata
|
Chapter 10. Managing errata As a part of Red Hat's quality control and release process, we provide customers with updates for each release of official Red Hat RPMs. Red Hat compiles groups of related packages into an erratum along with an advisory that provides a description of the update. There are three types of advisories (in order of importance): Security Advisory Describes fixed security issues found in the package. The security impact of the issue can be Low, Moderate, Important, or Critical. Bug Fix Advisory Describes bug fixes for the package. Product Enhancement Advisory Describes enhancements and new features added to the package. Red Hat Satellite imports this errata information when synchronizing repositories with Red Hat's Content Delivery Network (CDN). Red Hat Satellite also provides tools to inspect and filter errata, allowing for precise update management. This way, you can select relevant updates and propagate them through content views to selected content hosts. Errata are labeled according to the most important advisory type they contain. Therefore, errata labeled as Product Enhancement Advisory can contain only enhancement updates, while Bug Fix Advisory errata can contain both bug fixes and enhancements, and Security Advisory can contain all three types. In Red Hat Satellite, there are two keywords that describe an erratum's relationship to the available content hosts: Applicable An erratum that applies to one or more content hosts, which means it updates packages present on the content host. Although these errata apply to content hosts, until their state changes to Installable , the errata are not ready to be installed. Installable errata are automatically applicable. Installable An erratum that applies to one or more content hosts and is available to install on the content host. Installable errata are available to a content host from lifecycle environment and the associated content view, but are not yet installed. This chapter shows how to manage errata and apply them to either a single host or multiple hosts. 10.1. Best practices for errata Use errata to add patches for security issues to a frozen set of content without unnecessarily updating other unaffected packages. Automate errata management by using a Hammer script or an Ansible Playbook . View errata on the content hosts page and compare the errata of the current content view and lifecycle environment to the Library lifecycle environment, which contains the latest synchronized packages. You can only apply errata included in the content view version of the lifecycle of your host. You can view applicable errata as a recommendation to create an incremental content view to provide errata to hosts. For more information, see Section 10.9, "Adding errata to an incremental content view" . 10.2. Inspecting available errata The following procedure describes how to view and filter the available errata and how to display metadata of the selected advisory. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Content Types > Errata to view the list of available errata. Use the filtering tools at the top of the page to limit the number of displayed errata: Select the repository to be inspected from the list. All Repositories is selected by default. The Applicable checkbox is selected by default to view only applicable errata in the selected repository. Select the Installable checkbox to view only errata marked as installable. 
To search the table of errata, type the query in the Search field in the form of: See Section 10.3, "Parameters available for errata search" for the list of parameters available for search. Find the list of applicable operators in Supported Operators for Granular Search in Administering Red Hat Satellite . Automatic suggestion works as you type. You can also combine queries with the use of and and or operators. For example, to display only security advisories related to the kernel package, type: Press Enter to start the search. Click the Errata ID of the erratum you want to inspect: The Details tab contains the description of the updated package as well as documentation of important fixes and enhancements provided by the update. On the Content Hosts tab, you can apply the erratum to selected content hosts as described in Section 10.11, "Applying errata to multiple hosts" . The Repositories tab lists repositories that already contain the erratum. You can filter repositories by the environment and content view, and search for them by the repository name. You can also use the new Host page to view to inspect available errata and select errata to install. In the Satellite web UI, navigate to Hosts > All Hosts and select the host you require. If there are errata associated with the host, an Installable Errata card on the new Host page displays an interactive pie chart showing a breakdown of the security advisories, bugfixes, and enhancements. On the new Host page, select the Content tab. On the Content page select the Errata tab. The page displays installable errata for the chosen host. Click the checkbox for any errata you wish to install. Select Apply via Remote Execution to use Remote Execution, or Apply via customized remote execution if you want to customize the remote execution. Click Submit . CLI procedure To view errata that are available for all organizations, enter the following command: To view details of a specific erratum, enter the following command: You can search errata by entering the query with the --search option. For example, to view applicable errata for the selected product that contains the specified bugs ordered so that the security errata are displayed on top, enter the following command: 10.3. Parameters available for errata search Parameter Description Example bug Search by the Bugzilla number. bug = 1172165 cve Search by the CVE number. cve = CVE-2015-0235 id Search by the errata ID. The auto-suggest system displays a list of available IDs as you type. id = RHBA-2014:2004 issued Search by the issue date. You can specify the exact date, like "Feb16,2015", or use keywords, for example "Yesterday", or "1 hour ago". The time range can be specified with the use of the "<" and ">" operators. issued < "Jan 12,2015" package Search by the full package build name. The auto-suggest system displays a list of available packages as you type. package = glib2-2.22.5-6.el6.i686 package_name Search by the package name. The auto-suggest system displays a list of available packages as you type. package_name = glib2 severity Search by the severity of the issue fixed by the security update. Specify Critical , Important , or Moderate . severity = Critical title Search by the advisory title. title ~ openssl type Search by the advisory type. Specify security , bugfix , or enhancement . type = bugfix updated Search by the date of the last update. You can use the same formats as with the issued parameter. updated = "6 days ago" 10.4. 
Applying installable errata Use the following procedure to view a list of installable errata and select errata to install. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the host you require. If there are errata associated with the host, they are displayed in an Installable Errata card on the new Host page. On the Content tab, Errata displays installable errata for the chosen host. Click the checkbox for any errata you wish to install. Using the vertical ellipsis icon to the errata you want to add to the host, select Apply via Remote Execution to use Remote Execution. Select Apply via customized remote execution if you want to customize the remote execution. Click Submit . 10.5. Running custom code while applying errata You can use custom snippets to run code before and/or after applying errata on hosts. Prerequisites Check your provisioning template to ensure that it supports the custom snippets you want to use. You can view all job templates that are in use under Administer > Remote Execution Features . Procedure In the Satellite web UI, navigate to Hosts > Templates > Job Templates . Click Create Template . In the Name field, enter a name for your custom snippet. The name must start with the name of a template that supports custom snippets: Append custom pre to the name of a template to run code before applying errata on hosts. Append custom post to the name of a template to run code after applying errata on hosts. If your template is called Install Errata - Katello Ansible Default , name your template Install Errata - Katello Ansible Default custom pre or Install Errata - Katello Ansible Default custom post . On the Type tab, select Snippet . Click Submit to create your custom snippet. CLI procedure Create a plain text file that contains your custom snippet. Create the template using hammer : 10.6. Subscribing to errata notifications You can configure email notifications for Satellite users. Users receive a summary of applicable and installable errata, notifications on content view promotion or after synchronizing a repository. For more information, see Configuring Email Notification Preferences in Administering Red Hat Satellite . 10.7. Limitations to repository dependency resolution With Satellite, using incremental updates to your content views solves some repository dependency problems. However, dependency resolution at a repository level still remains problematic on occasion. When a repository update becomes available with a new dependency, Satellite retrieves the newest version of the package to solve the dependency, even if there are older versions available in the existing repository package. This can create further dependency resolution problems when installing packages. Example scenario A repository on your client has the package example_repository-1.0 with the dependency example_repository-libs-1.0 . The repository also has another package example_tools-1.0 . A security erratum becomes available with the package example_tools-1.1 . The example_tools-1.1 package requires the example_repository-libs-1.1 package as a dependency. After an incremental content view update, the example_tools-1.1 , example_tools-1.0 , and example_repository-libs-1.1 are now in the repository. The repository also has the packages example_repository-1.0 and example_repository-libs-1.0 . Note that the incremental update to the content view did not add the package example_repository-1.1 . 
Because you can install all these packages by using dnf , no potential problem is detected. However, when the client installs the example_tools-1.1 package, a dependency resolution problem occurs because both example_repository-libs-1.0 and example_repository-libs-1.1 cannot be installed. There is currently no workaround for this problem. The larger the time frame, and minor Y releases between the base set of packages and the errata being applied, the higher the chance of a problem with dependency resolution. 10.8. Creating a content view filter for errata You can use content filters to limit errata. Such filters include: ID - Select specific erratum to allow into your resulting repositories. Date Range - Define a date range and include a set of errata released during that date range. Type - Select the type of errata to include such as bug fixes, enhancements, and security updates. Create a content filter to exclude errata after a certain date. This ensures your production systems in the application lifecycle are kept up to date to a certain point. Then you can modify the filter's start date to introduce new errata into your testing environment to test the compatibility of new packages into your application lifecycle. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites A content view with the repositories that contain required errata is created. For more information, see Section 7.4, "Creating a content view" . Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select a content view that you want to use for applying errata. Select Yum Content > Filters and click New Filter . In the Name field, enter Errata Filter . From the Content Type list, select Erratum - Date and Type . From the Inclusion Type list, select Exclude . In the Description field, enter Exclude errata items from YYYY-MM-DD . Click Save . For Errata Type , select the checkboxes of errata types you want to exclude. For example, select the Enhancement and Bugfix checkboxes and clear the Security checkbox to exclude enhancement and bugfix errata after certain date, but include all the security errata. For Date Type , select one of two checkboxes: Issued On for the issued date of the erratum. Updated On for the date of the erratum's last update. Select the Start Date to exclude all errata on or after the selected date. Leave the End Date field blank. Click Save . Click Publish New Version to publish the resulting repository. Enter Adding errata filter in the Description field. Click Save . When the content view completes publication, notice the Content column reports a reduced number of packages and errata from the initial repository. This means the filter successfully excluded the all non-security errata from the last year. Click the Versions tab. Click Promote to the right of the published version. Select the environments you want to promote the content view version to. In the Description field, enter the description for promoting. Click Promote Version to promote this content view version across the required environments. CLI procedure Create a filter for the errata: Create a filter rule to exclude all errata on or after the Start Date that you want to set: Publish the content view: Promote the content view to the lifecycle environment so that the included errata are available to that lifecycle environment: 10.9. 
Adding errata to an incremental content view If errata are available but not installable, you can create an incremental content view version to add the errata to your content hosts. For example, if the content view is version 1.0, it becomes content view version 1.1, and when you publish, it becomes content view version 2.0. Important If your content view version is old, you might encounter incompatibilities when incrementally adding enhancement errata. This is because enhancements are typically designed for the most current software in a repository. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Content Types > Errata . From the Errata list, click the name of the errata that you want to apply. Select the content hosts that you want to apply the errata to, and click Apply to Hosts . This creates the incremental update to the content view. If you want to apply the errata to the content host, select the Apply Errata to Content Hosts immediately after publishing checkbox. Click Confirm to apply the errata. CLI procedure List the errata and its corresponding IDs: List the different content-view versions and the corresponding IDs: Apply a single erratum to content-view version. You can add more IDs in a comma-separated list. 10.10. Applying errata to hosts Use these procedures to review and apply errata to hosts. Prerequisites Synchronize Red Hat Satellite repositories with the latest errata available from Red Hat. For more information, see Section 4.7, "Synchronizing repositories" . Register the host to an environment and content view on Satellite Server. For more information, see Registering Hosts in Managing hosts . Configure the host for remote execution. For more information about running remote execution jobs, see Configuring and Setting Up Remote Jobs in Managing hosts . The procedure to apply an erratum to a host depends on its operating system. 10.10.1. Applying errata to hosts running Red Hat Enterprise Linux 9 Use this procedure to review and apply errata to a host running Red Hat Enterprise Linux 9. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to. Navigate to the Errata tab to see the list of errata. Select the errata to apply and click Apply Selected . In the confirmation window, click Apply . After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages. CLI procedure List all errata for the host: Find the module stream an erratum belongs to: On the host, update the module stream: 10.10.2. Applying errata to hosts running Red Hat Enterprise Linux 8 Use this procedure to review and apply errata to a host running Red Hat Enterprise Linux 8. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to. Navigate to the Errata tab to see the list of errata. Select the errata to apply and click Apply Selected . In the confirmation window, click Apply . After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages. CLI procedure List all errata for the host: Find the module stream an erratum belongs to: On the host, update the module stream: 10.10.3. 
10.10.3. Applying errata to hosts running Red Hat Enterprise Linux 7 Use this procedure to review and apply errata to a host running Red Hat Enterprise Linux 7. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to. Navigate to the Errata tab to see the list of errata. Select the errata to apply and click Apply Selected . In the confirmation window, click Apply . After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages. CLI procedure List all errata for the host: Apply the most recent erratum to the host. Identify the erratum to apply using the erratum ID. Using Remote Execution 10.11. Applying errata to multiple hosts Use these procedures to review and apply errata to multiple RHEL hosts. Prerequisites Synchronize Red Hat Satellite repositories with the latest errata available from Red Hat. For more information, see Section 4.7, "Synchronizing repositories" . Register the hosts to an environment and content view on Satellite Server. For more information, see Registering Hosts in Managing hosts . Configure the hosts for remote execution. For more information about running remote execution jobs, see Configuring and Setting Up Remote Jobs in Managing hosts . Procedure In the Satellite web UI, navigate to Content > Content Types > Errata . Click the name of an erratum you want to apply. Click the Content Hosts tab. Select the hosts you want to apply errata to and click Apply to Hosts . Click Confirm . CLI procedure List all installable errata: Apply one of the errata to multiple hosts: Using Remote Execution The following Bash script applies an erratum to each host for which this erratum is available: for HOST in $(hammer --csv --csv-separator "|" host list --search "applicable_errata = ERRATUM_ID" --organization "Default Organization" | tail -n+2 | awk -F "|" '{ print $2 }') ; do echo "== Applying to $HOST ==" ; hammer host errata apply --host $HOST --errata-ids ERRATUM_ID1,ERRATUM_ID2 ; done This script identifies all hosts for which ERRATUM_ID is applicable and then applies the listed errata to each of those hosts. To see if an erratum is applied successfully, find the corresponding task in the output of the following command: View the state of a selected task: 10.12. Applying errata to a host collection Using Remote Execution
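The host collection case uses the same katello_errata_install remote execution feature shown in the commands at the end of this section. As an illustrative sketch with hypothetical values (erratum RHSA-2023:5678, host collection web-servers), the invocation and a follow-up status check might look like this:
# Install the erratum on every host in the collection through remote execution (values are hypothetical).
hammer job-invocation create --feature katello_errata_install --inputs errata=RHSA-2023:5678 --search-query "host_collection = web-servers"
# Track the resulting task until it completes.
hammer task list
hammer task progress --id task_ID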
|
[
"parameter operator value",
"type = security and package_name = kernel",
"hammer erratum list",
"hammer erratum info --id erratum_ID",
"hammer erratum list --product-id 7 --search \"bug = 1213000 or bug = 1207972\" --errata-restrict-applicable 1 --order \"type desc\"",
"hammer template create --file \"~/ My_Snippet \" --locations \" My_Location \" --name \" My_Template_Name_custom_pre\" \\ --organizations \"_My_Organization \" --type snippet",
"hammer content-view filter create --content-view \" My_Content_View \" --description \"Exclude errata items from the YYYY-MM-DD \" --name \" My_Filter_Name \" --organization \" My_Organization \" --type \"erratum\"",
"hammer content-view filter rule create --content-view \" My_Content_View \" --content-view-filter=\" My_Content_View_Filter \" --organization \" My_Organization \" --start-date \" YYYY-MM-DD \" --types=security,enhancement,bugfix",
"hammer content-view publish --name \" My_Content_View \" --organization \" My_Organization \"",
"hammer content-view version promote --content-view \" My_Content_View \" --organization \" My_Organization \" --to-lifecycle-environment \" My_Lifecycle_Environment \"",
"hammer erratum list",
"hammer content-view version list",
"hammer content-view version incremental-update --content-view-version-id 319 --errata-ids 34068b",
"hammer host errata list --host client.example.com",
"hammer erratum info --id ERRATUM_ID",
"dnf upgrade Module_Stream_Name",
"hammer host errata list --host client.example.com",
"hammer erratum info --id ERRATUM_ID",
"dnf upgrade Module_Stream_Name",
"hammer host errata list --host client.example.com",
"hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID1 , ERRATUM_ID2 --search-query \"name = client.example.com\"",
"hammer erratum list --errata-restrict-installable true --organization \" Default Organization \"",
"hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID --search-query \"applicable_errata = ERRATUM_ID \"",
"for HOST in hammer --csv --csv-separator \"|\" host list --search \"applicable_errata = ERRATUM_ID\" --organization \"Default Organization\" | tail -n+2 | awk -F \"|\" '{ print USD2 }' ; do echo \"== Applying to USDHOST ==\" ; hammer host errata apply --host USDHOST --errata-ids ERRATUM_ID1,ERRATUM_ID2 ; done",
"hammer task list",
"hammer task progress --id task_ID",
"hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID1 , ERRATUM_ID2 ,... --search-query \"host_collection = HOST_COLLECTION_NAME \""
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/managing_errata_content-management
|
Chapter 3. BareMetalHost [metal3.io/v1alpha1]
|
Chapter 3. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BareMetalHostSpec defines the desired state of BareMetalHost status object BareMetalHostStatus defines the observed state of BareMetalHost 3.1.1. .spec Description BareMetalHostSpec defines the desired state of BareMetalHost Type object Required online Property Type Description automatedCleaningMode string When set to disabled, automated cleaning will be avoided during provisioning and deprovisioning. bmc object How do we connect to the BMC? bootMACAddress string Which MAC address will PXE boot? This is optional for some types, but required for libvirt VMs driven by vbmc. bootMode string Select the method of initializing the hardware during boot. Defaults to UEFI. consumerRef object ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". customDeploy object A custom deploy procedure. description string Description is a human-entered text used to help identify the host externallyProvisioned boolean ExternallyProvisioned means something else is managing the image running on the host and the operator should only manage the power status and hardware inventory inspection. If the Image field is filled in, this field is ignored. firmware object BIOS configuration for bare metal server hardwareProfile string What is the name of the hardware profile for this host? It should only be necessary to set this when inspection cannot automatically determine the profile. image object Image holds the details of the image to be provisioned. metaData object MetaData holds the reference to the Secret containing host metadata (e.g. meta_data.json) which is passed to the Config Drive. networkData object NetworkData holds the reference to the Secret containing network configuration (e.g content of network_data.json) which is passed to the Config Drive. online boolean Should the server be online? preprovisioningNetworkDataName string PreprovisioningNetworkDataName is the name of the Secret in the local namespace containing network configuration (e.g content of network_data.json) which is passed to the preprovisioning image, and to the Config Drive if not overridden by specifying NetworkData. raid object RAID configuration for bare metal server rootDeviceHints object Provide guidance about how to choose the device for the image being provisioned. taints array Taints is the full, authoritative list of taints to apply to the corresponding Machine. This list will overwrite any modifications made to the Machine on an ongoing basis. 
taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. userData object UserData holds the reference to the Secret containing the user data to be passed to the host before it boots. 3.1.2. .spec.bmc Description How do we connect to the BMC? Type object Required address credentialsName Property Type Description address string Address holds the URL for accessing the controller on the network. credentialsName string The name of the secret containing the BMC credentials (requires keys "username" and "password"). disableCertificateVerification boolean DisableCertificateVerification disables verification of server certificates when using HTTPS to connect to the BMC. This is required when the server certificate is self-signed, but is insecure because it allows a man-in-the-middle to intercept the connection. 3.1.3. .spec.consumerRef Description ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.1.4. .spec.customDeploy Description A custom deploy procedure. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.5. .spec.firmware Description BIOS configuration for bare metal server Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. This supports following options: true, false. sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. This supports following options: true, false. virtualizationEnabled boolean Supports the virtualization of platform hardware. This supports following options: true, false. 3.1.6. .spec.image Description Image holds the details of the image to be provisioned. 
Type object Required url Property Type Description checksum string Checksum is the checksum for the image. checksumType string ChecksumType is the checksum algorithm for the image. e.g md5, sha256, sha512 format string DiskFormat contains the format of the image (raw, qcow2, ... ). Needs to be set to raw for raw images streaming. Note live-iso means an iso referenced by the url will be live-booted and not deployed to disk, and in this case the checksum options are not required and if specified will be ignored. url string URL is a location of an image to deploy. 3.1.7. .spec.metaData Description MetaData holds the reference to the Secret containing host metadata (e.g. meta_data.json) which is passed to the Config Drive. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.8. .spec.networkData Description NetworkData holds the reference to the Secret containing network configuration (e.g content of network_data.json) which is passed to the Config Drive. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.9. .spec.raid Description RAID configuration for bare metal server Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure. Software RAID will always be deleted. 3.1.10. .spec.rootDeviceHints Description Provide guidance about how to choose the device for the image being provisioned. Type object Property Type Description deviceName string A Linux device name like "/dev/vda", or a by-path link to it like "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.11. 
.spec.taints Description Taints is the full, authoritative list of taints to apply to the corresponding Machine. This list will overwrite any modifications made to the Machine on an ongoing basis. Type array 3.1.12. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 3.1.13. .spec.userData Description UserData holds the reference to the Secret containing the user data to be passed to the host before it boots. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.14. .status Description BareMetalHostStatus defines the observed state of BareMetalHost Type object Required errorCount errorMessage hardwareProfile operationalStatus poweredOn provisioning Property Type Description errorCount integer ErrorCount records how many times the host has encoutered an error since the last successful operation errorMessage string the last error message reported by the provisioning subsystem errorType string ErrorType indicates the type of failure encountered when the OperationalStatus is OperationalStatusError goodCredentials object the last credentials we were able to validate as working hardware object The hardware discovered to exist on the host. hardwareProfile string The name of the profile matching the hardware details. lastUpdated string LastUpdated identifies when this status was last observed. operationHistory object OperationHistory holds information about operations performed on this host. operationalStatus string OperationalStatus holds the status of the host poweredOn boolean indicator for whether or not the host is powered on provisioning object Information tracked by the provisioner. triedCredentials object the last credentials we sent to the provisioning backend 3.1.15. .status.goodCredentials Description the last credentials we were able to validate as working Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.16. .status.goodCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.17. .status.hardware Description The hardware discovered to exist on the host. Type object Property Type Description cpu object CPU describes one processor on the host. firmware object Firmware describes the firmware on the host. hostname string nics array nics[] object NIC describes one network interface on the host. ramMebibytes integer storage array storage[] object Storage describes one storage device (disk, SSD, etc.) on the host. 
systemVendor object HardwareSystemVendor stores details about the whole hardware system. 3.1.18. .status.hardware.cpu Description CPU describes one processor on the host. Type object Property Type Description arch string clockMegahertz number ClockSpeed is a clock speed in MHz count integer flags array (string) model string 3.1.19. .status.hardware.firmware Description Firmware describes the firmware on the host. Type object Property Type Description bios object The BIOS for this firmware 3.1.20. .status.hardware.firmware.bios Description The BIOS for this firmware Type object Property Type Description date string The release/build date for this BIOS vendor string The vendor name for this BIOS version string The version of the BIOS 3.1.21. .status.hardware.nics Description Type array 3.1.22. .status.hardware.nics[] Description NIC describes one network interface on the host. Type object Property Type Description ip string The IP address of the interface. This will be an IPv4 or IPv6 address if one is present. If both IPv4 and IPv6 addresses are present in a dual-stack environment, two nics will be output, one with each IP. mac string The device MAC address model string The vendor and product IDs of the NIC, e.g. "0x8086 0x1572" name string The name of the network interface, e.g. "en0" pxe boolean Whether the NIC is PXE Bootable speedGbps integer The speed of the device in Gigabits per second vlanId integer The untagged VLAN ID vlans array The VLANs available vlans[] object VLAN represents the name and ID of a VLAN 3.1.23. .status.hardware.nics[].vlans Description The VLANs available Type array 3.1.24. .status.hardware.nics[].vlans[] Description VLAN represents the name and ID of a VLAN Type object Property Type Description id integer VLANID is a 12-bit 802.1Q VLAN identifier name string 3.1.25. .status.hardware.storage Description Type array 3.1.26. .status.hardware.storage[] Description Storage describes one storage device (disk, SSD, etc.) on the host. Type object Property Type Description hctl string The SCSI location of the device model string Hardware model name string The Linux device name of the disk, e.g. "/dev/sda". Note that this may not be stable across reboots. rotational boolean Whether this disk represents rotational storage. This field is not recommended for usage, please prefer using 'Type' field instead, this field will be deprecated eventually. serialNumber string The serial number of the device sizeBytes integer The size of the disk in Bytes type string Device type, one of: HDD, SSD, NVME. vendor string The name of the vendor of the device wwn string The WWN of the device wwnVendorExtension string The WWN Vendor extension of the device wwnWithExtension string The WWN with the extension 3.1.27. .status.hardware.systemVendor Description HardwareSystemVendor stores details about the whole hardware system. Type object Property Type Description manufacturer string productName string serialNumber string 3.1.28. .status.operationHistory Description OperationHistory holds information about operations performed on this host. Type object Property Type Description deprovision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. inspect object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. provision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. 
register object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. 3.1.29. .status.operationHistory.deprovision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.30. .status.operationHistory.inspect Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.31. .status.operationHistory.provision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.32. .status.operationHistory.register Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.33. .status.provisioning Description Information tracked by the provisioner. Type object Required ID state Property Type Description ID string The machine's UUID from the underlying provisioning tool bootMode string BootMode indicates the boot mode used to provision the node customDeploy object Custom deploy procedure applied to the host. firmware object The Bios set by the user image object Image holds the details of the last image successfully provisioned to the host. raid object The Raid set by the user rootDeviceHints object The RootDevicehints set by the user state string An indiciator for what the provisioner is doing with the host. 3.1.34. .status.provisioning.customDeploy Description Custom deploy procedure applied to the host. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.35. .status.provisioning.firmware Description The Bios set by the user Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. This supports following options: true, false. sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. This supports following options: true, false. virtualizationEnabled boolean Supports the virtualization of platform hardware. This supports following options: true, false. 3.1.36. .status.provisioning.image Description Image holds the details of the last image successfully provisioned to the host. Type object Required url Property Type Description checksum string Checksum is the checksum for the image. checksumType string ChecksumType is the checksum algorithm for the image. e.g md5, sha256, sha512 format string DiskFormat contains the format of the image (raw, qcow2, ... ). Needs to be set to raw for raw images streaming. Note live-iso means an iso referenced by the url will be live-booted and not deployed to disk, and in this case the checksum options are not required and if specified will be ignored. url string URL is a location of an image to deploy. 3.1.37. .status.provisioning.raid Description The Raid set by the user Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. 
You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure. Software RAID will always be deleted. 3.1.38. .status.provisioning.rootDeviceHints Description The RootDevicehints set by the user Type object Property Type Description deviceName string A Linux device name like "/dev/vda", or a by-path link to it like "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.39. .status.triedCredentials Description the last credentials we sent to the provisioning backend Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.40. .status.triedCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/baremetalhosts GET : list objects of kind BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts DELETE : delete collection of BareMetalHost GET : list objects of kind BareMetalHost POST : create a BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} DELETE : delete a BareMetalHost GET : read the specified BareMetalHost PATCH : partially update the specified BareMetalHost PUT : replace the specified BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status GET : read status of the specified BareMetalHost PATCH : partially update status of the specified BareMetalHost PUT : replace status of the specified BareMetalHost 3.2.1. /apis/metal3.io/v1alpha1/baremetalhosts Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind BareMetalHost Table 3.2. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty 3.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts Table 3.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of BareMetalHost Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind BareMetalHost Table 3.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.8. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty HTTP method POST Description create a BareMetalHost Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. Body parameters Parameter Type Description body BareMetalHost schema Table 3.11. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 202 - Accepted BareMetalHost schema 401 - Unauthorized Empty 3.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the BareMetalHost namespace string object name and auth scope, such as for teams and projects Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a BareMetalHost Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BareMetalHost Table 3.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.18. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BareMetalHost Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body Patch schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BareMetalHost Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body BareMetalHost schema Table 3.24. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty 3.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status Table 3.25. Global path parameters Parameter Type Description name string name of the BareMetalHost namespace string object name and auth scope, such as for teams and projects Table 3.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified BareMetalHost Table 3.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.28. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified BareMetalHost Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.30. 
Body parameters Parameter Type Description body Patch schema Table 3.31. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified BareMetalHost Table 3.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.33. Body parameters Parameter Type Description body BareMetalHost schema Table 3.34. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty
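The schema and endpoints above can be exercised with a manifest and the oc client. The following is a minimal, untested sketch rather than a recommended configuration: the host name, namespace, BMC address, credentials Secret, and image URLs are placeholder assumptions, and only the documented fields spec.online, spec.bootMACAddress, spec.bmc, spec.image, and spec.rootDeviceHints are set:
# Create a BareMetalHost from a manifest (all values are placeholders).
oc apply -f - <<'EOF'
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: example-worker-0
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: 00:11:22:33:44:55
  bmc:
    address: ipmi://192.0.2.10
    credentialsName: example-worker-0-bmc-secret
  image:
    url: http://example.com/images/rhcos.qcow2
    checksum: http://example.com/images/rhcos.qcow2.md5sum
  rootDeviceHints:
    deviceName: /dev/sda
EOF
# Read the objects back through oc, or directly against the namespaced list endpoint documented above.
oc get baremetalhosts -n openshift-machine-api
oc get --raw "/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts?limit=10"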
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/provisioning_apis/baremetalhost-metal3-io-v1alpha1
|
Chapter 3. Recommended resource requirements for Red Hat Advanced Cluster Security for Kubernetes
|
Chapter 3. Recommended resource requirements for Red Hat Advanced Cluster Security for Kubernetes The recommended resource guidelines were developed by performing a focused test that created the following objects across a given number of namespaces: 10 deployments, with 3 pod replicas in a sleep state, mounting 4 secrets and 4 config maps 10 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the deployments 1 route pointing to the first of the services 10 secrets containing 2048 random string characters 10 config maps containing 2048 random string characters During the analysis of the results, the number of deployments was identified as the primary factor driving resource usage. Therefore, the number of deployments is used to estimate the required resources. Additional resources Default resource requirements 3.1. Central services (self-managed) Note If you are using Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), you do not need to review the requirements for Central services, because they are managed by Red Hat. You only need to look at the requirements for secured cluster services. Central services contain the following components: Central Central DB Scanner Note For default resource requirements for the scanner, see the default resource requirements page. 3.1.1. Central Memory and CPU requirements The following table lists the minimum memory and CPU values required to run Central for one secured cluster. The table includes the number of concurrent web portal users. Deployments Concurrent web portal users CPU Memory < 25,000 1 user 2 cores 8 GiB < 25,000 < 5 users 2 cores 8 GiB < 50,000 1 user 2 cores 12 GiB < 50,000 < 5 users 6 cores 16 GiB 3.1.2. Central DB Memory and CPU requirements The following table lists the minimum memory and CPU values required to run Central DB for one secured cluster. The table includes the number of concurrent web portal users. Deployments Concurrent web portal users CPU Memory < 25,000 1 user 12 cores 32 GiB < 25,000 < 5 users 24 cores 32 GiB < 50,000 1 user 16 cores 32 GiB < 50,000 < 5 users 32 cores 32 GiB 3.1.3. Scanner StackRox Scanner Memory and CPU requirements The following table lists the minimum memory and CPU values required for the StackRox Scanner deployment in the Central cluster. The table includes the number of unique images deployed in all secured clusters. Unique Images Replicas CPU Memory < 100 1 replica 1 core 1.5 GiB < 500 1 replica 2 cores 2.5 GiB < 2000 2 replicas 2 cores 2.5 GiB < 5000 3 replicas 2 cores 2.5 GiB Additional resources Default resource requirements 3.2. Secured cluster services Secured cluster services contain the following components: Sensor Admission controller Collector Note The Collector component is not included on this page. Required resource requirements are listed on the default resource requirements page. 3.2.1. Sensor Sensor monitors your Kubernetes and OpenShift Container Platform clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with Collector. Memory and CPU requirements The following table lists the minimum memory and CPU values required to run Sensor on a secured cluster. Deployments CPU Memory < 25,000 2 cores 10 GiB < 50,000 2 cores 20 GiB 3.2.2. Admission controller The admission controller prevents users from creating workloads that violate policies that you configure.
Memory and CPU requirements The following table lists the minimum memory and CPU values required to run the admission controller on a secured cluster. Deployments CPU Memory < 25,000 0.5 cores 300 MiB < 50,000 0.5 cores 600 MiB
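As an illustration only, the sizing figures above can be translated into resource requests on the corresponding deployments. The sketch below assumes the default stackrox namespace and the default deployment names (central, central-db, sensor); in an Operator- or Helm-managed installation you would normally set these values through the Central custom resource or Helm values instead, so treat the commands as a way to read the tables rather than as the supported tuning procedure:
# Example: sizing for fewer than 25,000 deployments and a single web portal user
oc -n stackrox set resources deployment/central --requests=cpu=2,memory=8Gi
oc -n stackrox set resources deployment/central-db --requests=cpu=12,memory=32Gi
# Sensor on each secured cluster for the same deployment count
oc -n stackrox set resources deployment/sensor --requests=cpu=2,memory=10Gi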
| null |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/installing/acs-recommended-requirements
|
5.3.10. /proc/sysvipc/
|
5.3.10. /proc/sysvipc/ This directory contains information about System V IPC resources. The files in this directory relate to System V IPC calls for messages ( msg ), semaphores ( sem ), and shared memory ( shm ).
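For example, you can read these files directly, or use the ipcs utility, which reports the same System V IPC resources:
# Shared memory segments, semaphore arrays, and message queues currently in use
cat /proc/sysvipc/shm
cat /proc/sysvipc/sem
cat /proc/sysvipc/msg
# The ipcs utility presents the same information in a more readable form
ipcs -a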
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-dir-sysvipc
|
Chapter 4. Publishing applications with .NET 6.0
|
Chapter 4. Publishing applications with .NET 6.0 .NET 6.0 applications can be published to use a shared system-wide version of .NET or to include .NET. The following methods exist for publishing .NET 6.0 applications: Single-file application - The application is self-contained and can be deployed as a single executable with all dependent files contained in a single binary. Note Single-file application deployment is not available on IBM Z and LinuxONE. Framework-dependent deployment (FDD) - The application uses a shared system-wide version of .NET. Note When publishing an application for RHEL, Red Hat recommends using FDD, because it ensures that the application is using an up-to-date version of .NET, built by Red Hat, that uses a set of native dependencies. These native libraries are part of the rh-dotnet60 Software Collection. Self-contained deployment (SCD) - The application includes .NET. This method uses a runtime built by Microsoft. Running applications outside the rh-dotnet60 Software Collection may cause issues due to the unavailability of native libraries. Prerequisites An existing .NET application. For more information on how to create a .NET application, see Creating an application using .NET . 4.1. Publishing .NET applications The following procedure outlines how to publish a framework-dependent application. Procedure Publish the framework-dependent application: Replace my-app with the name of the application you want to publish. Optional: If the application is for RHEL only, trim out the dependencies needed for other platforms: To run the application on a RHEL system, enable the Software Collection and pass the application to the dotnet command: You can add the scl enable rh-dotnet60 -- dotnet <app>.dll command to a script that is published with the application. Add the following script to your project and update the variables: To include the script when publishing, add this ItemGroup to the csproj file:
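The commands that accompany this chapter cover the framework-dependent case. For comparison, a self-contained or single-file publish typically looks like the following sketch; it reuses the my-app name and the rhel.7-x64 runtime identifier from this chapter, and the exact options depend on your project:
# Self-contained deployment (SCD): the publish output includes the .NET runtime
dotnet publish my-app -f net6.0 -c Release -r rhel.7-x64 --self-contained true
# Single-file application: additionally bundle the output into one executable
# (not available on IBM Z and LinuxONE)
dotnet publish my-app -f net6.0 -c Release -r rhel.7-x64 --self-contained true -p:PublishSingleFile=true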
|
[
"dotnet publish my-app -f net6.0 -c Release",
"dotnet restore my-app -r rhel.7-x64 dotnet publish my-app -f net6.0 -c Release -r rhel.7-x64 --self-contained false",
"scl enable rh-dotnet60 -- dotnet <app>.dll",
"#!/bin/bash APP=<app> SCL=rh-dotnet60 DIR=\"USD(dirname \"USD(readlink -f \"USD0\")\")\" scl enable USDSCL -- \"USDDIR/USDAPP\" \"USD@\"",
"<ItemGroup> <None Update=\"<scriptname>\" Condition=\"'USD(RuntimeIdentifier)' == 'rhel.7-x64' and 'USD(SelfContained)' == 'false'\" CopyToPublishDirectory=\"PreserveNewest\" /> </ItemGroup>"
] |
https://docs.redhat.com/en/documentation/net/6.0/html/getting_started_with_.net_on_rhel_7/assembly_publishing-apps-using-dotnet_getting-started-with-dotnet-on-rhel-7
|
Installing and managing Red Hat OpenStack Platform with director
|
Installing and managing Red Hat OpenStack Platform with director Red Hat OpenStack Platform 17.1 Using director to create and manage a Red Hat OpenStack Platform cloud OpenStack Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/installing_and_managing_red_hat_openstack_platform_with_director/index
|
Chapter 7. Managing alerts
|
Chapter 7. Managing alerts In OpenShift Dedicated 4, the Alerting UI enables you to manage alerts, silences, and alerting rules. Alerting rules . Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. Alerts . An alert is fired when the conditions defined in an alerting rule are true. Alerts provide a notification that a set of circumstances are apparent within an OpenShift Dedicated cluster. Silences . A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the issue. Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as a user with the cluster-admin role, you can access all alerts, silences, and alerting rules. 7.1. Accessing the Alerting UI from the Administrator perspective The Alerting UI is accessible through the Administrator perspective of the OpenShift Dedicated web console. From the Administrator perspective, go to Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting rules pages. 7.2. Accessing the Alerting UI from the Developer perspective The Alerting UI is accessible through the Developer perspective of the OpenShift Dedicated web console. From the Developer perspective, go to Observe and go to the Alerts tab. Select the project that you want to manage alerts for from the Project: list. In this perspective, alerts, silences, and alerting rules are all managed from the Alerts tab. The results shown in the Alerts tab are specific to the selected project. Note In the Developer perspective, you can select from core OpenShift Dedicated and user-defined projects that you have access to in the Project: <project_name> list. However, alerts, silences, and alerting rules relating to core OpenShift Dedicated projects are not displayed if you are not logged in as a cluster administrator. 7.3. Searching and filtering alerts, silences, and alerting rules You can filter the alerts, silences, and alerting rules that are displayed in the Alerting UI. This section provides a description of each of the available filtering options. 7.3.1. Understanding alert filters In the Administrator perspective, the Alerts page in the Alerting UI provides details about alerts relating to default OpenShift Dedicated and user-defined projects. The page includes a summary of severity, state, and source for each alert. The time at which an alert went into its current state is also shown. You can filter by alert state, severity, and source. By default, only Platform alerts that are Firing are displayed. The following describes each alert filtering option: State filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions. Severity filters: Critical . 
The condition that triggered the alert could have a critical impact. The alert requires immediate attention when fired and is typically paged to an individual or to a critical response team. Warning . The alert provides a warning notification about something that might require attention to prevent a problem from occurring. Warnings are typically routed to a ticketing system for non-immediate review. Info . The alert is provided for informational purposes only. None . The alert has no defined severity. You can also create custom severity definitions for alerts relating to user-defined projects. Source filters: Platform . Platform-level alerts relate only to default OpenShift Dedicated projects. These projects provide core OpenShift Dedicated functionality. User . User alerts relate to user-defined projects. These alerts are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. 7.3.2. Understanding silence filters In the Administrator perspective, the Silences page in the Alerting UI provides details about silences applied to alerts in default OpenShift Dedicated and user-defined projects. The page includes a summary of the state of each silence and the time at which a silence ends. You can filter by silence state. By default, only Active and Pending silences are displayed. The following describes each silence state filter option: State filters: Active . The silence is active and the alert will be muted until the silence is expired. Pending . The silence has been scheduled and it is not yet active. Expired . The silence has expired and notifications will be sent if the conditions for an alert are true. 7.3.3. Understanding alerting rule filters In the Administrator perspective, the Alerting rules page in the Alerting UI provides details about alerting rules relating to default OpenShift Dedicated and user-defined projects. The page includes a summary of the state, severity, and source for each alerting rule. You can filter alerting rules by alert state, severity, and source. By default, only Platform alerting rules are displayed. The following describes each alerting rule filtering option: Alert state filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions. Not Firing . The alert is not firing. Severity filters: Critical . The conditions defined in the alerting rule could have a critical impact. When true, these conditions require immediate attention. Alerts relating to the rule are typically paged to an individual or to a critical response team. Warning . The conditions defined in the alerting rule might require attention to prevent a problem from occurring. Alerts relating to the rule are typically routed to a ticketing system for non-immediate review. Info . The alerting rule provides informational alerts only. None . The alerting rule has no defined severity. You can also create custom severity definitions for alerting rules relating to user-defined projects. Source filters: Platform . 
Platform-level alerting rules relate only to default OpenShift Dedicated projects. These projects provide core OpenShift Dedicated functionality. User . User-defined workload alerting rules relate to user-defined projects. These alerting rules are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. 7.3.4. Searching and filtering alerts, silences, and alerting rules in the Developer perspective In the Developer perspective, the Alerts page in the Alerting UI provides a combined view of alerts and silences relating to the selected project. A link to the governing alerting rule is provided for each displayed alert. In this view, you can filter by alert state and severity. By default, all alerts in the selected project are displayed if you have permission to access the project. These filters are the same as those described for the Administrator perspective. 7.4. Getting information about alerts, silences, and alerting rules from the Administrator perspective The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a user with view permissions for the project that you are viewing alerts for. Procedure To obtain information about alerts: From the Administrator perspective of the OpenShift Dedicated web console, go to the Observe Alerting Alerts page. Optional: Search for alerts by name by using the Name field in the search list. Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerts by clicking one or more of the Name , Severity , State , and Source column headers. Click the name of an alert to view its Alert details page. The page includes a graph that illustrates alert time series data. It also provides the following information about the alert: A description of the alert Messages associated with the alert Labels attached to the alert A link to its governing alerting rule Silences for the alert, if any exist To obtain information about silences: From the Administrator perspective of the OpenShift Dedicated web console, go to the Observe Alerting Silences page. Optional: Filter the silences by name using the Search by name field. Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied. Optional: Sort the silences by clicking one or more of the Name , Firing alerts , State , and Creator column headers. Select the name of a silence to view its Silence details page. The page includes the following details: Alert specification Start time End time Silence state Number and list of firing alerts To obtain information about alerting rules: From the Administrator perspective of the OpenShift Dedicated web console, go to the Observe Alerting Alerting rules page. Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerting rules by clicking one or more of the Name , Severity , Alert state , and Source column headers. Select the name of an alerting rule to view its Alerting rule details page. The page provides the following details about the alerting rule: Alerting rule name, severity, and description. The expression that defines the condition for firing the alert. The time for which the condition should be true for an alert to fire. 
A graph for each alert governed by the alerting rule, showing the value with which the alert is firing. A table of all alerts governed by the alerting rule. 7.5. Getting information about alerts, silences, and alerting rules from the Developer perspective The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a user with view permissions for the project that you are viewing alerts for. Procedure To obtain information about alerts, silences, and alerting rules: From the Developer perspective of the OpenShift Dedicated web console, go to the Observe <project_name> Alerts page. View details for an alert, silence, or an alerting rule: Alert details can be viewed by clicking a greater than symbol ( > ) to an alert name and then selecting the alert from the list. Silence details can be viewed by clicking a silence in the Silenced by section of the Alert details page. The Silence details page includes the following information: Alert specification Start time End time Silence state Number and list of firing alerts Alerting rule details can be viewed by clicking the menu to an alert in the Alerts page and then clicking View Alerting Rule . Note Only alerts, silences, and alerting rules relating to the selected project are displayed in the Developer perspective. Additional resources See the Cluster Monitoring Operator runbooks to help diagnose and resolve issues that trigger specific OpenShift Dedicated monitoring alerts. 7.6. Managing silences You can create a silence for an alert in the OpenShift Dedicated web console in both the Administrator and Developer perspectives. After you create a silence, you will not receive notifications about an alert when the alert fires. Creating silences is useful in scenarios where you have received an initial alert notification, and you do not want to receive further notifications during the time in which you resolve the underlying issue causing the alert to fire. When creating a silence, you must specify whether it becomes active immediately or at a later time. You must also set a duration period after which the silence expires. After you create silences, you can view, edit, and expire them. Note When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. Additional resources Configuring persistent storage 7.6.1. Silencing alerts from the Administrator perspective You can silence a specific alert or silence alerts that match a specification that you define. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure To silence a specific alert: From the Administrator perspective of the OpenShift Dedicated web console, go to Observe Alerting Alerts . For the alert that you want to silence, click and select Silence alert to open the Silence alert page with a default configuration for the chosen alert. Optional: Change the default configuration details for the silence. Note You must add a comment before saving a silence. To save the silence, click Silence . To silence a set of alerts: From the Administrator perspective of the OpenShift Dedicated web console, go to Observe Alerting Silences . Click Create silence . On the Create silence page, set the schedule, duration, and label details for an alert. 
Note You must add a comment before saving a silence. To create silences for alerts that match the labels that you entered, click Silence . 7.6.2. Silencing alerts from the Developer perspective You can silence a specific alert or silence alerts that match a specification that you define. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the dedicated-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure To silence a specific alert: From the Developer perspective of the OpenShift Dedicated web console, go to Observe and go to the Alerts tab. Select the project that you want to silence an alert for from the Project: list. If necessary, expand the details for the alert by clicking a greater than symbol ( > ) to the alert name. Click the alert message in the expanded view to open the Alert details page for the alert. Click Silence alert to open the Silence alert page with a default configuration for the alert. Optional: Change the default configuration details for the silence. Note You must add a comment before saving a silence. To save the silence, click Silence . To silence a set of alerts: From the Developer perspective of the OpenShift Dedicated web console, go to Observe and go to the Silences tab. Select the project that you want to silence alerts for from the Project: list. Click Create silence . On the Create silence page, set the duration and label details for an alert. Note You must add a comment before saving a silence. To create silences for alerts that match the labels that you entered, click Silence . 7.6.3. Editing silences from the Administrator perspective You can edit a silence, which expires the existing silence and creates a new one with the changed configuration. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the dedicated-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. Procedure From the Administrator perspective of the OpenShift Dedicated web console, go to Observe Alerting Silences . For the silence you want to modify, click and select Edit silence . Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence. On the Edit silence page, make changes and click Silence . Doing so expires the existing silence and creates one with the updated configuration. 7.6.4. Editing silences from the Developer perspective You can edit a silence, which expires the existing silence and creates a new one with the changed configuration. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the dedicated-admin role. 
If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure From the Developer perspective of the OpenShift Dedicated web console, go to Observe and go to the Silences tab. Select the project that you want to edit silences for from the Project: list. For the silence you want to modify, click and select Edit silence . Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence. On the Edit silence page, make changes and click Silence . Doing so expires the existing silence and creates one with the updated configuration. 7.6.5. Expiring silences from the Administrator perspective You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently. Note You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the dedicated-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. Procedure Go to Observe Alerting Silences . For the silence or silences you want to expire, select the checkbox in the corresponding row. Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected. Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence. 7.6.6. Expiring silences from the Developer perspective You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently. Note You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the dedicated-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure From the Developer perspective of the OpenShift Dedicated web console, go to Observe and go to the Silences tab. Select the project that you want to expire a silence for from the Project: list. For the silence or silences you want to expire, select the checkbox in the corresponding row. Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected. Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence. 7.7. Creating alerting rules for user-defined projects In OpenShift Dedicated, you can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. 
If you create alerting rules for a user-defined project, consider the following key behaviors and important limitations when you define the new rules: A user-defined alerting rule can include metrics exposed by its own project in addition to the default metrics from core platform monitoring. You cannot include metrics from another user-defined project. For example, an alerting rule for the ns1 user-defined project can use metrics exposed by the ns1 project in addition to core platform metrics, such as CPU and memory metrics. However, the rule cannot include metrics from a different ns2 user-defined project. By default, when you create an alerting rule, the namespace label is enforced on it even if a rule with the same name exists in another project. To create alerting rules that are not bound to their project of origin, see "Creating cross-project alerting rules for user-defined projects". To reduce latency and to minimize the load on core platform monitoring components, you can add the openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus label to a rule. This label forces only the Prometheus instance deployed in the openshift-user-workload-monitoring project to evaluate the alerting rule and prevents the Thanos Ruler instance from doing so. Important If an alerting rule has this label, your alerting rule can use only those metrics exposed by your user-defined project. Alerting rules you create based on default platform metrics might not trigger alerts. 7.7.1. Optimizing alerting for user-defined projects You can optimize alerting for your own projects by considering the following recommendations when creating alerting rules: Minimize the number of alerting rules that you create for your project . Create alerting rules that notify you of conditions that impact you. It is more difficult to notice relevant alerts if you generate many alerts for conditions that do not impact you. Create alerting rules for symptoms instead of causes . Create alerting rules that notify you of conditions regardless of the underlying cause. The cause can then be investigated. You will need many more alerting rules if each relates only to a specific cause. Some causes are then likely to be missed. Plan before you write your alerting rules . Determine what symptoms are important to you and what actions you want to take if they occur. Then build an alerting rule for each symptom. Provide clear alert messaging . State the symptom and recommended actions in the alert message. Include severity levels in your alerting rules . The severity of an alert depends on how you need to react if the reported symptom occurs. For example, a critical alert should be triggered if a symptom requires immediate attention by an individual or a critical response team. 7.7.2. Creating alerting rules for user-defined projects You can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. Note To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . 
Add an alerting rule configuration to the YAML file. The following example creates a new alerting rule named example-alert . The alerting rule fires an alert when the version metric exposed by the sample service becomes 0 : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job="prometheus-example-app"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5 1 The name of the alerting rule you want to create. 2 The duration for which the condition should be true before an alert is fired. 3 The PromQL query expression that defines the new rule. 4 The severity that alerting rule assigns to the alert. 5 The message associated with the alert. Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml 7.7.3. Creating cross-project alerting rules for user-defined projects You can create alerting rules for user-defined projects that are not bound to their project of origin by configuring a project in the user-workload-monitoring-config config map. This allows you to create generic alerting rules that get applied to multiple user-defined projects instead of having individual PrometheusRule objects in each user project. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Note If you are a non-administrator user, you can still create cross-project alerting rules if you have the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. However, that project needs to be configured in the user-workload-monitoring-config config map under the namespacesWithoutLabelEnforcement property, which can be done only by cluster administrators. The user-workload-monitoring-config ConfigMap object exists. This object is created by default when the cluster is created. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Configure projects in which you want to create alerting rules that are not bound to a specific project: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | namespacesWithoutLabelEnforcement: [ <namespace> ] 1 # ... 1 Specify one or more projects in which you want to create cross-project alerting rules. Prometheus and Thanos Ruler for user-defined monitoring do not enforce the namespace label in PrometheusRule objects created in these projects. Create a YAML file for alerting rules. In this example, it is called example-cross-project-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. The following example creates a new cross-project alerting rule called example-security . The alerting rule fires when a user project does not enforce the restricted pod security policy: Example cross-project alerting rule apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-security namespace: ns1 1 spec: groups: - name: pod-security-policy rules: - alert: "ProjectNotEnforcingRestrictedPolicy" 2 for: 5m 3 expr: kube_namespace_labels{namespace!~"(openshift|kube).*|default",label_pod_security_kubernetes_io_enforce!="restricted"} 4 annotations: message: "Restricted policy not enforced. 
Project {{ USDlabels.namespace }} does not enforce the restricted pod security policy." 5 labels: severity: warning 6 1 Ensure that you specify the project that you defined in the namespacesWithoutLabelEnforcement field. 2 The name of the alerting rule you want to create. 3 The duration for which the condition should be true before an alert is fired. 4 The PromQL query expression that defines the new rule. 5 The message associated with the alert. 6 The severity that alerting rule assigns to the alert. Important Ensure that you create a specific cross-project alerting rule in only one of the projects that you specified in the namespacesWithoutLabelEnforcement field. If you create the same cross-project alerting rule in multiple projects, it results in repeated alerts. Apply the configuration file to the cluster: USD oc apply -f example-cross-project-alerting-rule.yaml Additional resources Prometheus alerting documentation Monitoring overview 7.8. Managing alerting rules for user-defined projects In OpenShift Dedicated, you can view, edit, and remove alerting rules in user-defined projects. Important Managing alerting rules for user-defined projects is only available in OpenShift Dedicated version 4.11 and later. Alerting rule considerations The default alerting rules are used specifically for the OpenShift Dedicated cluster. Some alerting rules intentionally have identical names. They send alerts about the same event with different thresholds, different severity, or both. Inhibition rules prevent notifications for lower severity alerts that are firing when a higher severity alert is also firing. 7.8.1. Accessing alerting rules for user-defined projects To list alerting rules for a user-defined project, you must have been assigned the monitoring-rules-view cluster role for the project. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-view cluster role for your project. You have installed the OpenShift CLI ( oc ). Procedure To list alerting rules in <project> : USD oc -n <project> get prometheusrule To list the configuration of an alerting rule, run the following: USD oc -n <project> get prometheusrule <rule> -o yaml 7.8.2. Listing alerting rules for all projects in a single view As a dedicated-admin , you can list alerting rules for core OpenShift Dedicated and user-defined projects together in a single view. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure From the Administrator perspective of the OpenShift Dedicated web console, go to Observe Alerting Alerting rules . Select the Platform and User sources in the Filter drop-down menu. Note The Platform source is selected by default. 7.8.3. Removing alerting rules for user-defined projects You can remove alerting rules for user-defined projects. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure To remove rule <foo> in <namespace> , run the following: USD oc -n <namespace> delete prometheusrule <foo> 7.8.4. Disabling cross-project alerting rules for user-defined projects Creating cross-project alerting rules for user-defined projects is enabled by default. 
Cluster administrators can disable the capability in the cluster-monitoring-config config map for the following reasons: To prevent user-defined monitoring from overloading the cluster monitoring stack. To prevent buggy alerting rules from being applied to the cluster without having to identify the rule that causes the issue. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config In the cluster-monitoring-config config map, disable the option to create cross-project alerting rules by setting the rulesWithoutLabelEnforcementAllowed value under data/config.yaml/userWorkload to false : kind: ConfigMap apiVersion: v1 metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | userWorkload: rulesWithoutLabelEnforcementAllowed: false # ... Save the file to apply the changes. Additional resources Alertmanager documentation 7.9. Sending notifications to external systems In OpenShift Dedicated 4, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Dedicated to send alerts to the following receiver types: PagerDuty Webhook Email Slack Microsoft Teams Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Dedicated monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. 7.9.1. Configuring different alert receivers for default platform alerts and user-defined alerts You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results: All default platform alerts are sent to a receiver owned by the team in charge of these alerts. All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts. You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts: Use the openshift_io_alert_source="platform" matcher to match default platform alerts. Use the openshift_io_alert_source!="platform" or 'openshift_io_alert_source=""' matcher to match user-defined alerts. Note This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts. 7.9.2. Configuring alert routing for user-defined projects If you are a non-administrator user who has been given the alert-routing-edit cluster role, you can create or edit alert routing for user-defined projects. Prerequisites Alert routing has been enabled for user-defined projects. 
You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml . Add an AlertmanagerConfig YAML definition to the file. For example: apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post Save the file. Apply the resource to the cluster: USD oc apply -f example-app-alert-routing.yaml The configuration is automatically applied to the Alertmanager pods. 7.10. Configuring Alertmanager to send notifications You can configure Alertmanager to send notifications by editing the alertmanager-user-workload secret for user-defined alerts. Note All features of a supported version of upstream Alertmanager are also supported in an OpenShift Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration . 7.10.1. Configuring alert routing for user-defined projects with the Alertmanager secret If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the alertmanager-user-workload secret in the openshift-user-workload-monitoring namespace. Note All features of a supported version of upstream Alertmanager are also supported in an OpenShift Dedicated Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation). Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have installed the OpenShift CLI ( oc ). Procedure Print the currently active Alertmanager configuration into the file alertmanager.yaml : USD oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Edit the configuration in alertmanager.yaml : global: http_config: proxy_from_environment: true 1 route: receiver: Default group_by: - name: Default routes: - matchers: - "service = prometheus-example-monitor" 2 receiver: <receiver> 3 receivers: - name: Default - name: <receiver> <receiver_configuration> 4 1 If you configured an HTTP cluster-wide proxy, set the proxy_from_environment parameter to true to enable proxying for all alert receivers. 2 Specify labels to match your alerts. This example targets all alerts that have the service="prometheus-example-monitor" label. 3 Specify the name of the receiver to use for the alerts group. 4 Specify the receiver configuration. Apply the new configuration in the file: USD oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=- 7.11. Additional resources PagerDuty official site PagerDuty Prometheus Integration Guide Support version matrix for monitoring components
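As a sketch of the receiver split described in Section 7.9.1, the route section of an Alertmanager configuration could use the openshift_io_alert_source label to separate default platform alerts from user-defined alerts. The receiver names here are placeholders, and each receiver still needs its own notification settings (webhook, email, PagerDuty, and so on) as described above:
route:
  receiver: Default
  routes:
  - matchers:
    - "openshift_io_alert_source = platform"
    receiver: platform-team
  - matchers:
    - "openshift_io_alert_source != platform"
    receiver: application-teams
receivers:
- name: Default
- name: platform-team
- name: application-teams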
|
[
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5",
"oc apply -f example-app-alerting-rule.yaml",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | namespacesWithoutLabelEnforcement: [ <namespace> ] 1 #",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-security namespace: ns1 1 spec: groups: - name: pod-security-policy rules: - alert: \"ProjectNotEnforcingRestrictedPolicy\" 2 for: 5m 3 expr: kube_namespace_labels{namespace!~\"(openshift|kube).*|default\",label_pod_security_kubernetes_io_enforce!=\"restricted\"} 4 annotations: message: \"Restricted policy not enforced. Project {{ USDlabels.namespace }} does not enforce the restricted pod security policy.\" 5 labels: severity: warning 6",
"oc apply -f example-cross-project-alerting-rule.yaml",
"oc -n <project> get prometheusrule",
"oc -n <project> get prometheusrule <rule> -o yaml",
"oc -n <namespace> delete prometheusrule <foo>",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"kind: ConfigMap apiVersion: v1 metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | userWorkload: rulesWithoutLabelEnforcementAllowed: false #",
"apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post",
"oc apply -f example-app-alert-routing.yaml",
"oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml",
"global: http_config: proxy_from_environment: true 1 route: receiver: Default group_by: - name: Default routes: - matchers: - \"service = prometheus-example-monitor\" 2 receiver: <receiver> 3 receivers: - name: Default - name: <receiver> <receiver_configuration> 4",
"oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-"
] |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/monitoring/managing-alerts
|
Chapter 19. Red Hat Enterprise Linux 7.5 for ARM
|
Chapter 19. Red Hat Enterprise Linux 7.5 for ARM Red Hat Enterprise Linux 7.5 for ARM introduces Red Hat Enterprise Linux 7.5 user space with an updated kernel, which is based on version 4.14 and is provided by the kernel-alt packages. The offering is distributed with other updated packages, but most of the packages are standard Red Hat Enterprise Linux 7 Server RPMs. Installation ISO images are available on the Customer Portal Downloads page . For information about Red Hat Enterprise Linux 7.5 user space, see the Red Hat Enterprise Linux 7 documentation . For information regarding the previous version, refer to Red Hat Enterprise Linux 7.4 for ARM - Release Notes . The following packages are provided as a Development Preview in this release: libvirt (Optional channel) qemu-kvm-ma (Optional channel) Note KVM virtualization is a Development Preview on the 64-bit ARM architecture, and thus is not supported by Red Hat. For more information, see the Virtualization Deployment and Administration Guide . Customers may contact Red Hat and describe their use case, which will be taken into consideration for a future release of Red Hat Enterprise Linux. 19.1. New Features and Updates Core Kernel This update introduces the qrwlock queue write lock for 64-bit ARM systems. The implementation of this mechanism improves performance and prevents lock starvation by ensuring fair handling of multiple CPUs competing for the global task lock. This change also resolves a known issue, which was present in earlier releases and which caused soft lockups under heavy load. Note that any kernel modules built for earlier versions of Red Hat Enterprise Linux 7 for ARM (against the kernel-alt packages) must be rebuilt against the updated kernel. (BZ#1507568) Security USBGuard is now fully supported on 64-bit ARM systems The USBGuard software framework provides system protection against intrusive USB devices by implementing basic whitelisting and blacklisting capabilities based on device attributes. Using USBGuard on 64-bit ARM systems, previously available as a Technology Preview, is now fully supported.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/chap-Red_Hat_Enterprise_Linux-7.5_Release_Notes-RHEL_for_ARM
|
Chapter 3. Configuring the Squid caching proxy server
|
Chapter 3. Configuring the Squid caching proxy server Squid is a proxy server that caches content to reduce bandwidth and load web pages more quickly. This chapter describes how to set up Squid as a proxy for the HTTP, HTTPS, and FTP protocols, as well as how to configure authentication and restrict access. 3.1. Setting up Squid as a caching proxy without authentication You can configure Squid as a caching proxy without authentication. The procedure limits access to the proxy based on IP ranges. Prerequisites The procedure assumes that the /etc/squid/squid.conf file is as provided by the squid package. If you edited this file before, remove the file and reinstall the package. Procedure Install the squid package: Edit the /etc/squid/squid.conf file: Adapt the localnet access control lists (ACLs) to match the IP ranges that should be allowed to use the proxy: By default, the /etc/squid/squid.conf file contains the http_access allow localnet rule that allows using the proxy from all IP ranges specified in localnet ACLs. Note that you must specify all localnet ACLs before the http_access allow localnet rule. Important Remove all existing acl localnet entries that do not match your environment. The following ACL exists in the default configuration and defines 443 as a port that uses the HTTPS protocol: If users should be able to use the HTTPS protocol on other ports as well, add an ACL for each of these ports: Update the list of acl Safe_ports rules to define the ports to which Squid can establish a connection. For example, so that clients using the proxy can access resources only on ports 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports statements in the configuration: By default, the configuration contains the http_access deny !Safe_ports rule that denies access to ports that are not defined in Safe_ports ACLs. Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the cache_dir parameter: With these settings: Squid uses the ufs cache type. Squid stores its cache in the /var/spool/squid/ directory. The cache grows up to 10000 MB. Squid creates 16 level-1 sub-directories in the /var/spool/squid/ directory. Squid creates 256 sub-directories in each level-1 directory. If you do not set a cache_dir directive, Squid stores the cache in memory. If you set a different cache directory than /var/spool/squid/ in the cache_dir parameter: Create the cache directory: Configure the permissions for the cache directory: If you run SELinux in enforcing mode, set the squid_cache_t context for the cache directory: If the semanage utility is not available on your system, install the policycoreutils-python-utils package. Open port 3128 in the firewall: Enable and start the squid service: Verification To verify that the proxy works correctly, download a web page using the curl utility: If curl does not display any error and the index.html file was downloaded to the current directory, the proxy works. 3.2. Setting up Squid as a caching proxy with LDAP authentication You can configure Squid as a caching proxy that uses LDAP to authenticate users. The procedure configures the proxy so that only authenticated users can use it. Prerequisites The procedure assumes that the /etc/squid/squid.conf file is as provided by the squid package. If you edited this file before, remove the file and reinstall the package. A service user, such as uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com , exists in the LDAP directory.
Squid uses this account only to search for the authenticating user. If the authenticating user exists, Squid binds as this user to the directory to verify the authentication. Procedure Install the squid package: Edit the /etc/squid/squid.conf file: To configure the basic_ldap_auth helper utility, add the following configuration entry to the top of /etc/squid/squid.conf : The following describes the parameters passed to the basic_ldap_auth helper utility in the example above: -b base_DN sets the LDAP search base. -D proxy_service_user_DN sets the distinguished name (DN) of the account Squid uses to search for the authenticating user in the directory. -W path_to_password_file sets the path to the file that contains the password of the proxy service user. Using a password file prevents the password from being visible in the operating system's process list. -f LDAP_filter specifies the LDAP search filter. Squid replaces the %s variable with the user name provided by the authenticating user. The (&(objectClass=person)(uid=%s)) filter in the example defines that the user name must match the value set in the uid attribute and that the directory entry contains the person object class. -ZZ enforces a TLS-encrypted connection over the LDAP protocol using the STARTTLS command. Omit the -ZZ in the following situations: The LDAP server does not support encrypted connections. The port specified in the URL uses the LDAPS protocol. The -H LDAP_URL parameter specifies the protocol, the host name or IP address, and the port of the LDAP server in URL format. Add the following ACL and rule so that Squid allows only authenticated users to use the proxy: Important Specify these settings before the http_access deny all rule. Remove the following rule to disable bypassing the proxy authentication from IP ranges specified in localnet ACLs: The following ACL exists in the default configuration and defines 443 as a port that uses the HTTPS protocol: If users should be able to use the HTTPS protocol on other ports as well, add an ACL for each of these ports: Update the list of acl Safe_ports rules to define the ports to which Squid can establish a connection. For example, so that clients using the proxy can access resources only on ports 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports statements in the configuration: By default, the configuration contains the http_access deny !Safe_ports rule that denies access to ports that are not defined in Safe_ports ACLs. Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the cache_dir parameter: With these settings: Squid uses the ufs cache type. Squid stores its cache in the /var/spool/squid/ directory. The cache grows up to 10000 MB. Squid creates 16 level-1 sub-directories in the /var/spool/squid/ directory. Squid creates 256 sub-directories in each level-1 directory. If you do not set a cache_dir directive, Squid stores the cache in memory. If you set a different cache directory than /var/spool/squid/ in the cache_dir parameter: Create the cache directory: Configure the permissions for the cache directory: If you run SELinux in enforcing mode, set the squid_cache_t context for the cache directory: If the semanage utility is not available on your system, install the policycoreutils-python-utils package.
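Taken together, the LDAP-related entries described in this procedure might look similar to the following excerpt at the top of /etc/squid/squid.conf ; the search base, bind DN, LDAP server URL, and the helper path (shown here for 64-bit systems) are placeholders that depend on your environment:
auth_param basic program /usr/lib64/squid/basic_ldap_auth -b "cn=users,cn=accounts,dc=example,dc=com" -D "uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com" -W /etc/squid/ldap_password -f "(&(objectClass=person)(uid=%s))" -ZZ -H ldap://ldap_server.example.com:389
acl ldap-auth proxy_auth REQUIRED
http_access allow ldap-auth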
Store the password of the LDAP service user in the /etc/squid/ldap_password file, and set appropriate permissions for the file: Open port 3128 in the firewall: Enable and start the squid service: Verification To verify that the proxy works correctly, download a web page using the curl utility: If curl does not display any error and the index.html file was downloaded to the current directory, the proxy works. Troubleshooting steps To verify that the helper utility works correctly: Manually start the helper utility with the same settings you used in the auth_param parameter: Enter a valid user name and password, and press Enter : If the helper utility returns OK , authentication succeeded. 3.3. Setting up Squid as a caching proxy with Kerberos authentication You can configure Squid as a caching proxy that authenticates users to an Active Directory (AD) using Kerberos. The procedure configures the proxy so that only authenticated users can use it. Prerequisites The procedure assumes that the /etc/squid/squid.conf file is as provided by the squid package. If you edited this file before, remove the file and reinstall the package. The server on which you want to install Squid is a member of the AD domain. Procedure Install the following packages: Authenticate as the AD domain administrator: Create a keytab for Squid, store it in the /etc/squid/HTTP.keytab file, and add the HTTP service principal to the keytab: Optional: If the system was initially joined to the AD domain with realm (via adcli ), use the following instructions to add the HTTP principal and create a keytab file for Squid: Add the HTTP service principal to the default keytab file /etc/krb5.keytab and verify: Load the /etc/krb5.keytab file, remove all service principals except HTTP , and save the remaining principals into the /etc/squid/HTTP.keytab file: In the interactive shell of ktutil , use the available options until all unwanted principals are removed from the keytab, for example: Warning The keys in /etc/krb5.keytab might get updated if SSSD or Samba/winbind updates the machine account password. After the update, the key in /etc/squid/HTTP.keytab will stop working, and you will need to perform the ktutil steps again to copy the new keys into the keytab. Set the owner of the keytab file to the squid user: Optional: Verify that the keytab file contains the HTTP service principal for the fully-qualified domain name (FQDN) of the proxy server: Edit the /etc/squid/squid.conf file: To configure the negotiate_kerberos_auth helper utility, add the following configuration entry to the top of /etc/squid/squid.conf : The following describes the parameters passed to the negotiate_kerberos_auth helper utility in the example above: -k file sets the path to the keytab file. Note that the squid user must have read permissions on this file. -s HTTP/ host_name @ kerberos_realm sets the Kerberos principal that Squid uses. Optionally, you can enable logging by passing one or both of the following parameters to the helper utility: -i logs informational messages, such as the authenticating user. -d enables debug logging. Squid logs the debugging information from the helper utility to the /var/log/squid/cache.log file. Add the following ACL and rule so that Squid allows only authenticated users to use the proxy: Important Specify these settings before the http_access deny all rule.
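For reference, the helper entry and the ACL just described might look like the following at the top of /etc/squid/squid.conf ; the proxy host name, Kerberos realm, and the helper path (shown here for 64-bit systems) are placeholders that depend on your environment:
auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/HTTP.keytab -s HTTP/[email protected]
acl kerb-auth proxy_auth REQUIRED
http_access allow kerb-auth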
Remove the following rule to disable bypassing the proxy authentication for IP ranges specified in localnet ACLs: The following ACL exists in the default configuration and defines 443 as a port that uses the HTTPS protocol: If users should be able to use the HTTPS protocol also on other ports, add an ACL for each of these ports: Update the list of acl Safe_ports rules to configure the ports to which Squid can establish a connection. For example, to allow clients using the proxy to access resources only on port 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports statements in the configuration: By default, the configuration contains the http_access deny !Safe_ports rule that denies access to ports that are not defined in Safe_ports ACLs. Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the cache_dir parameter: With these settings: Squid uses the ufs cache type. Squid stores its cache in the /var/spool/squid/ directory. The cache grows up to 10000 MB. Squid creates 16 level-1 sub-directories in the /var/spool/squid/ directory. Squid creates 256 sub-directories in each level-1 directory. If you do not set a cache_dir directive, Squid stores the cache in memory. If you set a different cache directory than /var/spool/squid/ in the cache_dir parameter: Create the cache directory: Configure the permissions for the cache directory: If you run SELinux in enforcing mode, set the squid_cache_t context for the cache directory: If the semanage utility is not available on your system, install the policycoreutils-python-utils package. Open the 3128 port in the firewall: Enable and start the squid service: Verification To verify that the proxy works correctly, download a web page using the curl utility: If curl does not display any error and the index.html file exists in the current directory, the proxy works. Troubleshooting steps Obtain a Kerberos ticket for the AD account: Optional: Display the ticket: Use the negotiate_kerberos_auth_test utility to test the authentication: If the helper utility returns a token, the authentication succeeded: 3.4. Configuring a domain deny list in Squid Frequently, administrators want to block access to specific domains. This section describes how to configure a domain deny list in Squid. Prerequisites Squid is configured, and users can use the proxy. Procedure Edit the /etc/squid/squid.conf file and add the following settings: Important Add these entries before the first http_access allow statement that allows access to users or clients. Create the /etc/squid/domain_deny_list.txt file and add the domains you want to block. For example, to block access to example.com including subdomains and to block example.net , add: Important If you refer to the /etc/squid/domain_deny_list.txt file in the Squid configuration, this file must not be empty. If the file is empty, Squid fails to start. Restart the squid service: 3.5. Configuring the Squid service to listen on a specific port or IP address By default, the Squid proxy service listens on the 3128 port on all network interfaces. You can change the port and configure Squid to listen on a specific IP address. Prerequisites The squid package is installed. Procedure Edit the /etc/squid/squid.conf file: To set the port on which the Squid service listens, set the port number in the http_port parameter.
For example, to set the port to 8080 , set: To configure the IP address on which the Squid service listens, set the IP address and port number in the http_port parameter. For example, to configure Squid to listen only on the 192.0.2.1 IP address on port 3128 , set: Add multiple http_port parameters to the configuration file to configure Squid to listen on multiple ports and IP addresses: If you configured Squid to use a different port than the default ( 3128 ): Open the port in the firewall: If you run SELinux in enforcing mode, assign the port to the squid_port_t port type definition: If the semanage utility is not available on your system, install the policycoreutils-python-utils package. Restart the squid service: 3.6. Additional resources Configuration parameters /usr/share/doc/squid-<version>/squid.conf.documented
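The following is a minimal sketch of how the directives described in this chapter can fit together in /etc/squid/squid.conf for the LDAP-authenticated caching proxy; it is not a complete configuration file. The DNs, host names, port, and cache settings are the example values used above; adjust them for your environment and keep the rules ordered before the final http_access deny all rule, as described in the procedure.

auth_param basic program /usr/lib64/squid/basic_ldap_auth -b "cn=users,cn=accounts,dc=example,dc=com" -D "uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com" -W /etc/squid/ldap_password -f "(&(objectClass=person)(uid=%s))" -ZZ -H ldap://ldap_server.example.com:389
acl ldap-auth proxy_auth REQUIRED
acl SSL_ports port 443
acl Safe_ports port 21
acl Safe_ports port 80
acl Safe_ports port 443
http_access deny !Safe_ports
http_access allow ldap-auth
http_access deny all
http_port 3128
cache_dir ufs /var/spool/squid 10000 16 256

After editing the file, you can check the syntax with the squid -k parse command before restarting the service.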
|
[
"dnf install squid",
"acl localnet src 192.0.2.0/24 acl localnet 2001:db8:1::/64",
"acl SSL_ports port 443",
"acl SSL_ports port port_number",
"acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443",
"cache_dir ufs /var/spool/squid 10000 16 256",
"mkdir -p path_to_cache_directory",
"chown squid:squid path_to_cache_directory",
"semanage fcontext -a -t squid_cache_t \" path_to_cache_directory (/.*)?\" restorecon -Rv path_to_cache_directory",
"firewall-cmd --permanent --add-port=3128/tcp firewall-cmd --reload",
"systemctl enable --now squid",
"curl -O -L \" https://www.redhat.com/index.html \" -x \" proxy.example.com:3128 \"",
"dnf install squid",
"auth_param basic program /usr/lib64/squid/basic_ldap_auth -b \" cn=users,cn=accounts,dc=example,dc=com \" -D \" uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com \" -W /etc/squid/ldap_password -f \" (&(objectClass=person)(uid=%s)) \" -ZZ -H ldap://ldap_server.example.com:389",
"acl ldap-auth proxy_auth REQUIRED http_access allow ldap-auth",
"http_access allow localnet",
"acl SSL_ports port 443",
"acl SSL_ports port port_number",
"acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443",
"cache_dir ufs /var/spool/squid 10000 16 256",
"mkdir -p path_to_cache_directory",
"chown squid:squid path_to_cache_directory",
"semanage fcontext -a -t squid_cache_t \" path_to_cache_directory (/.*)?\" restorecon -Rv path_to_cache_directory",
"echo \" password \" > /etc/squid/ldap_password chown root:squid /etc/squid/ldap_password chmod 640 /etc/squid/ldap_password",
"firewall-cmd --permanent --add-port=3128/tcp firewall-cmd --reload",
"systemctl enable --now squid",
"curl -O -L \" https://www.redhat.com/index.html \" -x \" user_name:[email protected]:3128 \"",
"/usr/lib64/squid/basic_ldap_auth -b \" cn=users,cn=accounts,dc=example,dc=com \" -D \" uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com \" -W /etc/squid/ldap_password -f \" (&(objectClass=person)(uid=%s)) \" -ZZ -H ldap://ldap_server.example.com:389",
"user_name password",
"dnf install squid krb5-workstation",
"kinit administrator@ AD.EXAMPLE.COM",
"export KRB5_KTNAME=FILE:/etc/squid/HTTP.keytab net ads keytab CREATE -U administrator net ads keytab ADD HTTP -U administrator",
"adcli update -vvv --domain=ad.example.com --computer-name=PROXY --add-service-principal=\"HTTP/proxy.ad.example.com\" -C klist -kte /etc/krb5.keytab | grep -i HTTP",
"ktutil ktutil: rkt /etc/krb5.keytab ktutil: l -e slot | KVNO | Principal ----------------------------------------------------------------------------- 1 | 2 | [email protected] (aes128-cts-hmac-sha1-96) 2 | 2 | [email protected] (aes256-cts-hmac-sha1-96) 3 | 2 | host/[email protected] (aes128-cts-hmac-sha1-96) 4 | 2 | host/[email protected] (aes256-cts-hmac-sha1-96) 5 | 2 | host/[email protected] (aes128-cts-hmac-sha1-96) 6 | 2 | host/[email protected] (aes256-cts-hmac-sha1-96) 7 | 2 | HTTP/[email protected] (aes128-cts-hmac-sha1-96) 8 | 2 | HTTP/[email protected] (aes256-cts-hmac-sha1-96)",
"ktutil: delent 1",
"ktutil: l -e slot | KVNO | Principal ------------------------------------------------------------------------------- 1 | 2 | HTTP/[email protected] (aes128-cts-hmac-sha1-96) 2 | 2 | HTTP/[email protected] (aes256-cts-hmac-sha1-96) ktutil: wkt /etc/squid/HTTP.keytab ktutil: q",
"chown squid /etc/squid/HTTP.keytab",
"klist -k /etc/squid/HTTP.keytab Keytab name: FILE:/etc/squid/HTTP.keytab KVNO Principal ---- --------------------------------------------------- 2 HTTP/[email protected]",
"auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/HTTP.keytab -s HTTP/ proxy.ad.example.com @ AD.EXAMPLE.COM",
"acl kerb-auth proxy_auth REQUIRED http_access allow kerb-auth",
"http_access allow localnet",
"acl SSL_ports port 443",
"acl SSL_ports port port_number",
"acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443",
"cache_dir ufs /var/spool/squid 10000 16 256",
"mkdir -p path_to_cache_directory",
"chown squid:squid path_to_cache_directory",
"semanage fcontext -a -t squid_cache_t \" path_to_cache_directory (/.*)?\" restorecon -Rv path_to_cache_directory",
"firewall-cmd --permanent --add-port=3128/tcp firewall-cmd --reload",
"systemctl enable --now squid",
"curl -O -L \" https://www.redhat.com/index.html \" --proxy-negotiate -u : -x \" proxy.ad.example.com:3128 \"",
"kinit user@ AD.EXAMPLE.COM",
"klist",
"/usr/lib64/squid/negotiate_kerberos_auth_test proxy.ad.example.com",
"Token: YIIFtAYGKwYBBQUCoIIFqDC",
"acl domain_deny_list dstdomain \"/etc/squid/domain_deny_list.txt\" http_access deny all domain_deny_list",
".example.com example.net",
"systemctl restart squid",
"http_port 8080",
"http_port 192.0.2.1:3128",
"http_port 192.0.2.1:3128 http_port 192.0.2.1:8080",
"firewall-cmd --permanent --add-port= port_number /tcp firewall-cmd --reload",
"semanage port -a -t squid_port_t -p tcp port_number",
"systemctl restart squid"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deploying_web_servers_and_reverse_proxies/configuring-the-squid-caching-proxy-server_deploying-web-servers-and-reverse-proxies
|
Preface
|
Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/release_notes_for_red_hat_build_of_apache_camel_for_quarkus/pr01
|
Chapter 15. ConfigService
|
Chapter 15. ConfigService 15.1. GetConfig GET /v1/config 15.1.1. Description 15.1.2. Parameters 15.1.3. Return Type StorageConfig 15.1.4. Content Type application/json 15.1.5. Responses Table 15.1. HTTP Response Codes Code Message Datatype 200 A successful response. StorageConfig 0 An unexpected error response. GooglerpcStatus 15.1.6. Samples 15.1.7. Common object reference 15.1.7.1. BannerConfigSize Enum Values UNSET SMALL MEDIUM LARGE 15.1.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 15.1.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 15.1.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 15.1.7.4. StorageAdministrationEventsConfig Field Name Required Nullable Type Description Format retentionDurationDays Long int64 15.1.7.5. 
StorageAlertRetentionConfig Field Name Required Nullable Type Description Format resolvedDeployRetentionDurationDays Integer int32 deletedRuntimeRetentionDurationDays Integer This runtime alert retention configuration takes precedence after allRuntimeRetentionDurationDays . int32 allRuntimeRetentionDurationDays Integer This runtime alert retention configuration has highest precedence. All runtime alerts, including attempted alerts and deleted deployment alerts, are deleted even if respective retention is longer. int32 attemptedDeployRetentionDurationDays Integer int32 attemptedRuntimeRetentionDurationDays Integer This runtime alert retention configuration has lowest precedence. int32 15.1.7.6. StorageBannerConfig Field Name Required Nullable Type Description Format enabled Boolean text String size BannerConfigSize UNSET, SMALL, MEDIUM, LARGE, color String backgroundColor String 15.1.7.7. StorageConfig Field Name Required Nullable Type Description Format publicConfig StoragePublicConfig privateConfig StoragePrivateConfig 15.1.7.8. StorageDayOption Field Name Required Nullable Type Description Format numDays Long int64 enabled Boolean 15.1.7.9. StorageDecommissionedClusterRetentionConfig Field Name Required Nullable Type Description Format retentionDurationDays Integer int32 ignoreClusterLabels Map of string lastUpdated Date date-time createdAt Date date-time 15.1.7.10. StorageLoginNotice Field Name Required Nullable Type Description Format enabled Boolean text String 15.1.7.11. StoragePrivateConfig Field Name Required Nullable Type Description Format DEPRECATEDAlertRetentionDurationDays Integer int32 alertConfig StorageAlertRetentionConfig imageRetentionDurationDays Integer int32 expiredVulnReqRetentionDurationDays Integer int32 decommissionedClusterRetention StorageDecommissionedClusterRetentionConfig reportRetentionConfig StorageReportRetentionConfig vulnerabilityExceptionConfig StorageVulnerabilityExceptionConfig administrationEventsConfig StorageAdministrationEventsConfig 15.1.7.12. StoragePublicConfig Field Name Required Nullable Type Description Format loginNotice StorageLoginNotice header StorageBannerConfig footer StorageBannerConfig telemetry StorageTelemetryConfiguration 15.1.7.13. StorageReportRetentionConfig Field Name Required Nullable Type Description Format historyRetentionDurationDays Long int64 downloadableReportRetentionDays Long int64 downloadableReportGlobalRetentionBytes Long int64 15.1.7.14. StorageTelemetryConfiguration Field Name Required Nullable Type Description Format enabled Boolean lastSetTime Date date-time 15.1.7.15. StorageVulnerabilityExceptionConfig Field Name Required Nullable Type Description Format expiryOptions StorageVulnerabilityExceptionConfigExpiryOptions 15.1.7.16. StorageVulnerabilityExceptionConfigExpiryOptions Field Name Required Nullable Type Description Format dayOptions List of StorageDayOption fixableCveOptions StorageVulnerabilityExceptionConfigFixableCVEOptions customDate Boolean indefinite Boolean 15.1.7.17. StorageVulnerabilityExceptionConfigFixableCVEOptions Field Name Required Nullable Type Description Format allFixable Boolean anyFixable Boolean 15.2. GetVulnerabilityExceptionConfig GET /v1/config/private/exception/vulnerabilities 15.2.1. Description 15.2.2. Parameters 15.2.3. Return Type V1GetVulnerabilityExceptionConfigResponse 15.2.4. Content Type application/json 15.2.5. Responses Table 15.2. HTTP Response Codes Code Message Datatype 200 A successful response. 
V1GetVulnerabilityExceptionConfigResponse 0 An unexpected error response. GooglerpcStatus 15.2.6. Samples 15.2.7. Common object reference 15.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 15.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 15.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 15.2.7.3. V1DayOption Field Name Required Nullable Type Description Format numDays Long int64 enabled Boolean 15.2.7.4. V1GetVulnerabilityExceptionConfigResponse Field Name Required Nullable Type Description Format config V1VulnerabilityExceptionConfig 15.2.7.5. V1VulnerabilityExceptionConfig Field Name Required Nullable Type Description Format expiryOptions V1VulnerabilityExceptionConfigExpiryOptions 15.2.7.6. V1VulnerabilityExceptionConfigExpiryOptions Field Name Required Nullable Type Description Format dayOptions List of V1DayOption This allows users to set expiry interval based on number of days. 
fixableCveOptions V1VulnerabilityExceptionConfigFixableCVEOptions customDate Boolean This option, if true, allows UI to show a custom date picker for setting expiry date. indefinite Boolean 15.2.7.7. V1VulnerabilityExceptionConfigFixableCVEOptions Field Name Required Nullable Type Description Format allFixable Boolean This options allows users to expire the vulnerability deferral request if and only if all vulnerabilities in the requests become fixable. anyFixable Boolean This options allows users to expire the vulnerability deferral request if any vulnerability in the requests become fixable. 15.3. UpdateVulnerabilityExceptionConfig PUT /v1/config/private/exception/vulnerabilities 15.3.1. Description 15.3.2. Parameters 15.3.2.1. Body Parameter Name Description Required Default Pattern body V1UpdateVulnerabilityExceptionConfigRequest X 15.3.3. Return Type V1UpdateVulnerabilityExceptionConfigResponse 15.3.4. Content Type application/json 15.3.5. Responses Table 15.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1UpdateVulnerabilityExceptionConfigResponse 0 An unexpected error response. GooglerpcStatus 15.3.6. Samples 15.3.7. Common object reference 15.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 15.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 15.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. 
(Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 15.3.7.3. V1DayOption Field Name Required Nullable Type Description Format numDays Long int64 enabled Boolean 15.3.7.4. V1UpdateVulnerabilityExceptionConfigRequest Field Name Required Nullable Type Description Format config V1VulnerabilityExceptionConfig 15.3.7.5. V1UpdateVulnerabilityExceptionConfigResponse Field Name Required Nullable Type Description Format config V1VulnerabilityExceptionConfig 15.3.7.6. V1VulnerabilityExceptionConfig Field Name Required Nullable Type Description Format expiryOptions V1VulnerabilityExceptionConfigExpiryOptions 15.3.7.7. V1VulnerabilityExceptionConfigExpiryOptions Field Name Required Nullable Type Description Format dayOptions List of V1DayOption This allows users to set expiry interval based on number of days. fixableCveOptions V1VulnerabilityExceptionConfigFixableCVEOptions customDate Boolean This option, if true, allows UI to show a custom date picker for setting expiry date. indefinite Boolean 15.3.7.8. V1VulnerabilityExceptionConfigFixableCVEOptions Field Name Required Nullable Type Description Format allFixable Boolean This options allows users to expire the vulnerability deferral request if and only if all vulnerabilities in the requests become fixable. anyFixable Boolean This options allows users to expire the vulnerability deferral request if any vulnerability in the requests become fixable. 15.4. GetPrivateConfig GET /v1/config/private 15.4.1. Description 15.4.2. Parameters 15.4.3. Return Type StoragePrivateConfig 15.4.4. Content Type application/json 15.4.5. Responses Table 15.4. HTTP Response Codes Code Message Datatype 200 A successful response. StoragePrivateConfig 0 An unexpected error response. GooglerpcStatus 15.4.6. Samples 15.4.7. Common object reference 15.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 15.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 15.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 15.4.7.3. StorageAdministrationEventsConfig Field Name Required Nullable Type Description Format retentionDurationDays Long int64 15.4.7.4. StorageAlertRetentionConfig Field Name Required Nullable Type Description Format resolvedDeployRetentionDurationDays Integer int32 deletedRuntimeRetentionDurationDays Integer This runtime alert retention configuration takes precedence after allRuntimeRetentionDurationDays . int32 allRuntimeRetentionDurationDays Integer This runtime alert retention configuration has highest precedence. All runtime alerts, including attempted alerts and deleted deployment alerts, are deleted even if respective retention is longer. int32 attemptedDeployRetentionDurationDays Integer int32 attemptedRuntimeRetentionDurationDays Integer This runtime alert retention configuration has lowest precedence. int32 15.4.7.5. StorageDayOption Field Name Required Nullable Type Description Format numDays Long int64 enabled Boolean 15.4.7.6. StorageDecommissionedClusterRetentionConfig Field Name Required Nullable Type Description Format retentionDurationDays Integer int32 ignoreClusterLabels Map of string lastUpdated Date date-time createdAt Date date-time 15.4.7.7. StoragePrivateConfig Field Name Required Nullable Type Description Format DEPRECATEDAlertRetentionDurationDays Integer int32 alertConfig StorageAlertRetentionConfig imageRetentionDurationDays Integer int32 expiredVulnReqRetentionDurationDays Integer int32 decommissionedClusterRetention StorageDecommissionedClusterRetentionConfig reportRetentionConfig StorageReportRetentionConfig vulnerabilityExceptionConfig StorageVulnerabilityExceptionConfig administrationEventsConfig StorageAdministrationEventsConfig 15.4.7.8. StorageReportRetentionConfig Field Name Required Nullable Type Description Format historyRetentionDurationDays Long int64 downloadableReportRetentionDays Long int64 downloadableReportGlobalRetentionBytes Long int64 15.4.7.9. 
StorageVulnerabilityExceptionConfig Field Name Required Nullable Type Description Format expiryOptions StorageVulnerabilityExceptionConfigExpiryOptions 15.4.7.10. StorageVulnerabilityExceptionConfigExpiryOptions Field Name Required Nullable Type Description Format dayOptions List of StorageDayOption fixableCveOptions StorageVulnerabilityExceptionConfigFixableCVEOptions customDate Boolean indefinite Boolean 15.4.7.11. StorageVulnerabilityExceptionConfigFixableCVEOptions Field Name Required Nullable Type Description Format allFixable Boolean anyFixable Boolean 15.5. GetPublicConfig GET /v1/config/public 15.5.1. Description 15.5.2. Parameters 15.5.3. Return Type StoragePublicConfig 15.5.4. Content Type application/json 15.5.5. Responses Table 15.5. HTTP Response Codes Code Message Datatype 200 A successful response. StoragePublicConfig 0 An unexpected error response. GooglerpcStatus 15.5.6. Samples 15.5.7. Common object reference 15.5.7.1. BannerConfigSize Enum Values UNSET SMALL MEDIUM LARGE 15.5.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 15.5.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 15.5.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 15.5.7.4. StorageBannerConfig Field Name Required Nullable Type Description Format enabled Boolean text String size BannerConfigSize UNSET, SMALL, MEDIUM, LARGE, color String backgroundColor String 15.5.7.5. StorageLoginNotice Field Name Required Nullable Type Description Format enabled Boolean text String 15.5.7.6. StoragePublicConfig Field Name Required Nullable Type Description Format loginNotice StorageLoginNotice header StorageBannerConfig footer StorageBannerConfig telemetry StorageTelemetryConfiguration 15.5.7.7. StorageTelemetryConfiguration Field Name Required Nullable Type Description Format enabled Boolean lastSetTime Date date-time 15.6. PutConfig PUT /v1/config 15.6.1. Description 15.6.2. Parameters 15.6.2.1. Body Parameter Name Description Required Default Pattern body V1PutConfigRequest X 15.6.3. Return Type StorageConfig 15.6.4. Content Type application/json 15.6.5. Responses Table 15.6. HTTP Response Codes Code Message Datatype 200 A successful response. StorageConfig 0 An unexpected error response. GooglerpcStatus 15.6.6. Samples 15.6.7. Common object reference 15.6.7.1. BannerConfigSize Enum Values UNSET SMALL MEDIUM LARGE 15.6.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 15.6.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 15.6.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 15.6.7.4. StorageAdministrationEventsConfig Field Name Required Nullable Type Description Format retentionDurationDays Long int64 15.6.7.5. StorageAlertRetentionConfig Field Name Required Nullable Type Description Format resolvedDeployRetentionDurationDays Integer int32 deletedRuntimeRetentionDurationDays Integer This runtime alert retention configuration takes precedence after allRuntimeRetentionDurationDays . int32 allRuntimeRetentionDurationDays Integer This runtime alert retention configuration has highest precedence. All runtime alerts, including attempted alerts and deleted deployment alerts, are deleted even if respective retention is longer. int32 attemptedDeployRetentionDurationDays Integer int32 attemptedRuntimeRetentionDurationDays Integer This runtime alert retention configuration has lowest precedence. int32 15.6.7.6. StorageBannerConfig Field Name Required Nullable Type Description Format enabled Boolean text String size BannerConfigSize UNSET, SMALL, MEDIUM, LARGE, color String backgroundColor String 15.6.7.7. StorageConfig Field Name Required Nullable Type Description Format publicConfig StoragePublicConfig privateConfig StoragePrivateConfig 15.6.7.8. StorageDayOption Field Name Required Nullable Type Description Format numDays Long int64 enabled Boolean 15.6.7.9. StorageDecommissionedClusterRetentionConfig Field Name Required Nullable Type Description Format retentionDurationDays Integer int32 ignoreClusterLabels Map of string lastUpdated Date date-time createdAt Date date-time 15.6.7.10. StorageLoginNotice Field Name Required Nullable Type Description Format enabled Boolean text String 15.6.7.11. StoragePrivateConfig Field Name Required Nullable Type Description Format DEPRECATEDAlertRetentionDurationDays Integer int32 alertConfig StorageAlertRetentionConfig imageRetentionDurationDays Integer int32 expiredVulnReqRetentionDurationDays Integer int32 decommissionedClusterRetention StorageDecommissionedClusterRetentionConfig reportRetentionConfig StorageReportRetentionConfig vulnerabilityExceptionConfig StorageVulnerabilityExceptionConfig administrationEventsConfig StorageAdministrationEventsConfig 15.6.7.12. StoragePublicConfig Field Name Required Nullable Type Description Format loginNotice StorageLoginNotice header StorageBannerConfig footer StorageBannerConfig telemetry StorageTelemetryConfiguration 15.6.7.13. StorageReportRetentionConfig Field Name Required Nullable Type Description Format historyRetentionDurationDays Long int64 downloadableReportRetentionDays Long int64 downloadableReportGlobalRetentionBytes Long int64 15.6.7.14. StorageTelemetryConfiguration Field Name Required Nullable Type Description Format enabled Boolean lastSetTime Date date-time 15.6.7.15. StorageVulnerabilityExceptionConfig Field Name Required Nullable Type Description Format expiryOptions StorageVulnerabilityExceptionConfigExpiryOptions 15.6.7.16. 
StorageVulnerabilityExceptionConfigExpiryOptions Field Name Required Nullable Type Description Format dayOptions List of StorageDayOption fixableCveOptions StorageVulnerabilityExceptionConfigFixableCVEOptions customDate Boolean indefinite Boolean 15.6.7.17. StorageVulnerabilityExceptionConfigFixableCVEOptions Field Name Required Nullable Type Description Format allFixable Boolean anyFixable Boolean 15.6.7.18. V1PutConfigRequest Field Name Required Nullable Type Description Format config StorageConfig
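The examples below are a hedged sketch of calling the GET /v1/config and PUT /v1/config endpoints documented in this chapter with curl. The ROX_ENDPOINT and ROX_API_TOKEN variables and the bearer-token authorization header are illustrative assumptions that are not defined in this reference; the request body follows the V1PutConfigRequest shape (a config field containing a StorageConfig).

# Hypothetical endpoint and token variables; replace them with values for your deployment.
curl -k -H "Authorization: Bearer $ROX_API_TOKEN" "https://$ROX_ENDPOINT/v1/config"

# PutConfig sends a full StorageConfig, so in practice you would typically fetch the
# current configuration, modify it, and send the complete object back.
curl -k -X PUT -H "Authorization: Bearer $ROX_API_TOKEN" -H "Content-Type: application/json" \
  -d '{"config": {"publicConfig": {"loginNotice": {"enabled": true, "text": "Authorized use only"}}}}' \
  "https://$ROX_ENDPOINT/v1/config"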
|
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"next available tag: 5",
"next available tag:9",
"next available tag: 4",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"next available tag: 5",
"next available tag:9",
"next available tag: 4",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"next available tag: 5",
"next available tag:9",
"next available tag: 4"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/configservice
|
A.3. Capturing Trace Data on a Constant Basis Using the Systemtap Flight Recorder
|
A.3. Capturing Trace Data on a Constant Basis Using the Systemtap Flight Recorder You can capture QEMU trace data continuously by using the systemtap initscript provided in the qemu-kvm package. This package uses SystemTap's flight recorder mode to trace all running guest virtual machines and to save the results to a fixed-size buffer on the host. Old trace entries are overwritten by new entries when the buffer is filled. Procedure A.1. Configuring and running systemtap Install the package Install the systemtap-initscript package by running the following command: Copy the configuration file Copy the systemtap scripts and the configuration files to the systemtap directory by running the following commands: The set of trace events to enable is given in qemu_kvm.stp. This SystemTap script can be customized to add or remove trace events provided in /usr/share/systemtap/tapset/qemu-kvm-simpletrace.stp . SystemTap customizations can be made to qemu_kvm.conf to control the flight recorder buffer size and whether to store traces in memory only or on disk as well. Start the service Start the systemtap service by running the following command: Enable systemtap to run at boot time Enable the systemtap service to run at boot time by running the following command: Confirm that the service is running Confirm that the service is working by running the following command: Procedure A.2. Inspecting the trace buffer Create a trace buffer dump file Create a trace buffer dump file called trace.log and place it in the /tmp directory by running the following command: You can change the file name and location to something else. Start the service As the previous step stops the service, start it again by running the following command: Convert the trace contents into a readable format To convert the trace file contents into a more readable format, enter the following command: Note The following limitations apply: The systemtap service is disabled by default. There is a small performance penalty when this service is enabled, but it depends on which events are enabled in total. There is a README file located in /usr/share/doc/qemu-kvm-*/README.systemtap .
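As a hedged alternative to the initscript, the sketch below runs the same trace script directly in SystemTap's flight recorder mode. The -F, -m, and -s options are standard stap options, but the 16 MB buffer size and the module name are illustrative values, not settings taken from this procedure.

stap -F -m qemu_kvm -s 16 /etc/systemtap/script.d/qemu_kvm.stp
staprun -A qemu_kvm > /tmp/trace.log

Here -F detaches stap into the background after the trace starts, -m qemu_kvm names the compiled module so that staprun -A qemu_kvm can attach to it later, and -s 16 limits the in-memory buffer to 16 MB.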
|
[
"yum install systemtap-initscript",
"cp /usr/share/qemu-kvm/systemtap/script.d/qemu_kvm.stp /etc/systemtap/script.d/ cp /usr/share/qemu-kvm/systemtap/conf.d/qemu_kvm.conf /etc/systemtap/conf.d/",
"systemctl start systemtap qemu_kvm",
"systemctl enable systemtap qemu_kvm",
"systemctl status systemtap qemu_kvm qemu_kvm is running",
"staprun -A qemu_kvm >/tmp/trace.log",
"systemctl start systemtap qemu_kvm",
"/usr/share/qemu-kvm/simpletrace.py --no-header /usr/share/qemu-kvm/trace-events /tmp/trace.log"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-systemtaptrace
|
Preface
|
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in four LTS versions: OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 operating systems, including Red Hat Enterprise Linux and Ubuntu.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.25/pr01
|
4.4. Packages Required to Install a Replica
|
4.4. Packages Required to Install a Replica Replica package requirements are the same as server package requirements. See Section 2.2, "Packages Required to Install an IdM Server" .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/replica-required-packages
|
Chapter 1. Preparing to deploy OpenShift Data Foundation
|
Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using local storage devices, you can create internal cluster resources. This approach internally provisions base services so that all applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that your resource requirements are met. See requirements for installing OpenShift Data Foundation using local storage devices . On the external key management system (KMS), ensure that a policy with a token exists and that the key value backend path in Vault is enabled. See enabling key value backend path and policy in vault . Ensure that you are using signed certificates on your Vault servers. After you have addressed the above, follow these steps in the order given: Install the Red Hat OpenShift Data Foundation Operator . Install Local Storage Operator . Find the available storage devices . Create the OpenShift Data Foundation cluster service on IBM Z . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Data Foundation. The devices you use must be empty; the disks must not include physical volumes (PVs), volume groups (VGs), or logical volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide. For storage nodes, FCP storage devices are required. 1.2. Enabling key value backend path and policy in Vault Prerequisites Administrator access to Vault. Carefully choose a unique path name as the backend path that follows the naming convention, because it cannot be changed later. Procedure Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict users to performing write or delete operations on the secret by using the following commands. Create a token that matches the above policy.
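The following is a minimal sketch of how you might verify the backend path, policy, and token created in this procedure. The vault policy read, vault kv put, and vault kv get commands are standard Vault CLI commands; the odf path and policy name are the example values used above, and the test entry name and the token placeholder are illustrative.

vault policy read odf
VAULT_TOKEN=<token_from_the_previous_step> vault kv put odf/test-entry key=value
VAULT_TOKEN=<token_from_the_previous_step> vault kv get odf/test-entry

If the token was created with the odf policy, the write and read operations succeed, while operations outside the odf/* path are denied.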
|
[
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_z_infrastructure/preparing_to_deploy_openshift_data_foundation
|
8.93. ipmitool
|
8.93. ipmitool 8.93.1. RHBA-2014:1624 - ipmitool bug fix update Updated ipmitool packages that fix one bug are now available for Red Hat Enterprise Linux 6. The ipmitool packages contain a command line utility for interfacing with devices that support the Intelligent Platform Management Interface (IPMI) specification. IPMI is an open standard for machine health, inventory, and remote power control. Bug Fix BZ# 1147593 Previously, the ipmitool default timeout values specified a time period that was too short. As a consequence, during retries, the ipmitool utility could terminate unexpectedly with a segmentation fault, or could produce a nonsensical error message. With this update, the ipmitool options passed from environment variables are parsed correctly from the IPMITOOL_OPTS and IPMI_OPTS variables, with IPMITOOL_* variables taking precedence over IPMI_* variables. As a result, ipmitool no longer crashes in the described situation. Users of ipmitool are advised to upgrade to these updated packages, which fix this bug. After installing this update, the IPMI event daemon (ipmievd) will be restarted automatically. 8.93.2. RHBA-2014:1567 - ipmitool bug fix and enhancement update Updated ipmitool packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The ipmitool packages contain a command line utility for interfacing with devices that support the Intelligent Platform Management Interface (IPMI) specification. IPMI is an open standard for machine health, inventory, and remote power control. Note The ipmitool utility has been upgraded to upstream version 1.8.14, which provides a number of bug fixes and enhancements over the previous version. (BZ# 825194 ) This update also fixes the following bug: Bug Fix BZ# 1029529 The IPMI kernel code was missing aliases for the IPMI kernel modules. Consequently, not all of the IPMI kernel modules could be automatically loaded when the appropriate hardware was detected, and the hardware thus could not be used. To fix this problem, the module alias configuration file has been added to the /etc/modprobe.d/ directory, linking all of the separate IPMI modules to the IPI* device class alias. Note that the system must be rebooted for this change to take effect. In addition, this update adds the following enhancement: Enhancement BZ# 1056581 To improve usage of ipmitool as a part of the software stack, certain environment variables were unified and several new variables were introduced to ipmitool, specifically: IPMITOOL_* variables now take precedence over IPMI_* variables. The IPMITOOL_KGKEY variable has been added to unify the namespace usage. Limited IPv6 support has been added to the ipmitool packages; the IPMI standard does not include the IPv6 data definitions, and therefore this change includes only IPv6 connectivity. The OEM-vendor-specific command values related to IPv6 are beyond the scope of this feature. Users of ipmitool are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. After installing this update, the IPMI event daemon (ipmievd) will be restarted automatically.
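To illustrate the environment variable behavior described above, the following is a minimal sketch. The -N (timeout in seconds) and -R (retry count) options are standard ipmitool options for the lan/lanplus interfaces; the host name, user, and option values shown are illustrative assumptions.

export IPMI_OPTS="-N 10 -R 3"
export IPMITOOL_OPTS="-N 5 -R 2"    # IPMITOOL_OPTS takes precedence over IPMI_OPTS
ipmitool -I lanplus -H bmc.example.com -U admin sel list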
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ipmitool
|
B.46. libvpx
|
B.46. libvpx B.46.1. RHSA-2010:0999 - Moderate: libvpx security update Updated libvpx packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The libvpx packages provide the VP8 SDK, which allows the encoding and decoding of the VP8 video codec, commonly used with the WebM multimedia container file format. CVE-2010-4203 An integer overflow flaw, leading to arbitrary memory writes, was found in libvpx. An attacker could create a specially-crafted video encoded using the VP8 codec that, when played by a victim with an application using libvpx (such as Totem), would cause the application to crash or, potentially, execute arbitrary code. All users of libvpx are advised to upgrade to these updated packages, which contain a backported patch to correct this issue. After installing the update, all applications using libvpx must be restarted for the changes to take effect.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/libvpx
|
Chapter 23. Red Hat JBoss Data Grid CLIs
|
Chapter 23. Red Hat JBoss Data Grid CLIs Red Hat JBoss Data Grid includes two Command Line Interfaces: a Library Mode CLI (see Section 23.1, "Red Hat JBoss Data Grid Library Mode CLI" for details) and a Server Mode CLI (see Section 23.2, "Red Hat Data Grid Server CLI" for details). 23.1. Red Hat JBoss Data Grid Library Mode CLI Red Hat JBoss Data Grid includes the Red Hat JBoss Data Grid Library Mode Command Line Interface (CLI) that is used to inspect and modify data within caches and internal components (such as transactions, cross-datacenter replication sites, and rolling upgrades). The JBoss Data Grid Library Mode CLI can also be used for more advanced operations such as transactions. 23.1.1. Start the Library Mode CLI (Server) Start the Red Hat JBoss Data Grid CLI's server-side module with the standalone or clustered scripts. For Linux, use the standalone.sh or clustered.sh script and for Windows, use the standalone.bat or clustered.bat file. 23.1.2. Start the Library Mode CLI (Client) Start the Red Hat JBoss Data Grid CLI client using the cli files in the bin directory. For Linux, run bin/cli.sh and for Windows, run bin\cli.bat . When starting up the CLI client, specific command line switches can be used to customize the start up. 23.1.3. CLI Client Switches for the Command Line The listed command line switches are appended to the command line when starting the Red Hat JBoss Data Grid CLI command: Table 23.1. CLI Client Command Line Switches Short Option Long Option Description -c --connect=USD{URL} Connects to a running Red Hat JBoss Data Grid instance. For example, for JMX over RMI use jmx://[username[:password]]@host:port[/container[/cache]] and for JMX over JBoss Remoting use remoting://[username[:password]]@host:port[/container[/cache]] -f --file=USD{FILE} Read the input from the specified file rather than using interactive mode. If the value is set to - , then stdin is used as the input. -h --help Displays the help information. -v --version Displays the CLI version information. 23.1.4. Connect to the Application Use the following command to connect to the application using the CLI: Note The port value 12000 depends on the value the JVM is started with. For example, starting the JVM with the -Dcom.sun.management.jmxremote.port=12000 command line parameter uses this port, but otherwise a random port is chosen. When the remoting protocol ( remoting://localhost:9999 ) is used, the Red Hat JBoss Data Grid server administration port is used (the default is port 9999 ). The command line prompt displays the active connection information, including the currently selected CacheManager . Use the cache command to select a cache before performing cache operations. The CLI supports tab completion, therefore typing cache and pressing the Tab key displays a list of active caches: Additionally, pressing Tab displays a list of valid commands for the CLI.
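As a quick illustration of the switches in Table 23.1, the client can connect at start-up instead of issuing connect interactively. This is only a sketch: the credentials and the commands.txt file are placeholders, while the host, port, and container name reuse the examples from this section:
# Connect over JMX/RMI to the MyCacheManager container when the client starts.
bin/cli.sh -c jmx://admin:secret@localhost:12000/MyCacheManager
# Connect over JBoss Remoting and run a batch of CLI commands from a file rather than interactive mode.
bin/cli.sh -c remoting://localhost:9999 -f commands.txt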
|
[
"[disconnected//]> connect jmx://localhost:12000 [jmx://localhost:12000/MyCacheManager/>",
"[[jmx://localhost:12000/MyCacheManager/> cache ___defaultcache namedCache [jmx://localhost:12000/MyCacheManager/]> cache ___defaultcache [jmx://localhost:12000/MyCacheManager/___defaultcache]>"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-red_hat_jboss_data_grid_clis
|
4.240. pulseaudio
|
4.240. pulseaudio 4.240.1. RHBA-2012:1066 - pulseaudio bug fix update Updated pulseaudio packages that fix one bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. PulseAudio is a sound server for Linux and other Unix-like operating systems. Bug Fix BZ# 836138 On certain sound card models by Creative Labs, the S/PDIF Optical Raw output was enabled on boot regardless of the settings. This caused the audio output on the analog duplex output to be disabled. With this update, the S/PDIF Optical Raw output is disabled on boot so that the analog output works as expected. All users of pulseaudio are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/pulseaudio
|
7.8. Creating a Bond Connection Using a GUI
|
7.8. Creating a Bond Connection Using a GUI You can use the GNOME control-center utility to direct NetworkManager to create a Bond from two or more Wired or InfiniBand connections. It is not necessary to create the connections to be bonded first. They can be configured as part of the process to configure the bond. You must have the MAC addresses of the interfaces available in order to complete the configuration process. 7.8.1. Establishing a Bond Connection Procedure 7.1. Adding a New Bond Connection Using nm-connection-editor Follow the steps below to create a new bond connection. Enter nm-connection-editor in a terminal: Click the Add button. The Choose a Connection Type window appears. Select Bond and click Create . The Editing Bond connection 1 window appears. Figure 7.6. The NetworkManager Graphical User Interface Add a Bond menu On the Bond tab, click Add and select the type of interface you want to use with the bond connection. Click the Create button. Note that the dialog to select the port type only comes up when you create the first port; after that, it will automatically use that same type for all further ports. The Editing bond0 slave 1 window appears. Use the Device MAC address drop-down menu to select the MAC address of the interface to be bonded. The first port's MAC address will be used as the MAC address for the bond interface. If required, enter a clone MAC address to be used as the bond's MAC address. Click the Save button. Figure 7.7. The NetworkManager Graphical User Interface Add a Bond Connection menu The name of the bonded port appears in the Bonded connections window. Click the Add button to add further port connections. Review and confirm the settings and then click the Save button. Edit the bond-specific settings by referring to Section 7.8.1.1, "Configuring the Bond Tab" below. Procedure 7.2. Editing an Existing Bond Connection Follow these steps to edit an existing bond connection. Enter nm-connection-editor in a terminal: Select the connection you want to edit and click the Edit button. Select the General tab. Configure the connection name, auto-connect behavior, and availability settings. Five settings in the Editing dialog are common to all connection types and appear on the General tab: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the menu of the Network window. Automatically connect to this network when it is available - Select this box if you want NetworkManager to auto-connect to this connection when it is available. See the section called "Editing an Existing Connection with control-center" for more information. All users may connect to this network - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 3.4.5, "Managing System-wide and Private Connection Profiles with a GUI" for details. Automatically connect to VPN when using this connection - Select this box if you want NetworkManager to auto-connect to a VPN connection when it is available. Select the VPN from the drop-down menu. Firewall Zone - Select the firewall zone from the drop-down menu. See the Red Hat Enterprise Linux 7 Security Guide for more information on firewall zones. Edit the bond-specific settings by referring to Section 7.8.1.1, "Configuring the Bond Tab" below.
Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your bond connection, click the Save button to save your customized configuration. Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" or IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 5.5, "Configuring IPv6 Settings" . 7.8.1.1. Configuring the Bond Tab If you have already added a new bond connection (see Procedure 7.1, "Adding a New Bond Connection Using nm-connection-editor" for instructions), you can edit the Bond tab to set the load sharing mode and the type of link monitoring to use to detect failures of a port connection. Mode The mode that is used to share traffic over the port connections which make up the bond. The default is Round-robin . Other load sharing modes, such as 802.3ad , can be selected by means of the drop-down list. Link Monitoring The method of monitoring the port's ability to carry network traffic. The following modes of load sharing are selectable from the Mode drop-down list: Round-robin Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded port interface beginning with the first one available. This mode might not work behind a bridge with virtual machines without additional switch configuration. Active backup Sets an active-backup policy for fault tolerance. Transmissions are received and sent out through the first available bonded port interface. Another bonded port interface is only used if the active bonded port interface fails. Note that this is the only mode available for bonds of InfiniBand devices. XOR Sets an XOR (exclusive-or) policy. Transmissions are based on the selected hash policy. The default is to derive a hash by XOR of the source and destination MAC addresses multiplied by the modulo of the number of port interfaces. In this mode, traffic destined for specific peers will always be sent over the same interface. As the destination is determined by the MAC addresses, this method works best for traffic to peers on the same link or local network. If traffic has to pass through a single router, then this mode of traffic balancing will be suboptimal. Broadcast Sets a broadcast policy for fault tolerance. All transmissions are sent on all port interfaces. This mode might not work behind a bridge with virtual machines without additional switch configuration. 802.3ad Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all ports in the active aggregator. Requires a network switch that is 802.3ad compliant. Adaptive transmit load balancing Sets an adaptive Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each port interface. Incoming traffic is received by the current port. If the receiving port fails, another port takes over the MAC address of the failed port. This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. Adaptive load balancing Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation.
This mode is only suitable for local addresses known to the kernel bonding module and therefore cannot be used behind a bridge with virtual machines. The following types of link monitoring can be selected from the Link Monitoring drop-down list. It is a good idea to test which channel bonding module parameters work best for your bonded interfaces. MII (Media Independent Interface) The state of the carrier wave of the interface is monitored. This can be done by querying the driver, by querying MII registers directly, or by using ethtool to query the device. Three options are available: Monitoring Frequency The time interval, in milliseconds, between querying the driver or MII registers. Link up delay The time in milliseconds to wait before attempting to use a link that has been reported as up. This delay can be used if some gratuitous ARP requests are lost in the period immediately following the link being reported as " up " . This can happen during switch initialization for example. Link down delay The time in milliseconds to wait before changing to another link when a previously active link has been reported as " down " . This delay can be used if an attached switch takes a relatively long time to change to backup mode. ARP The address resolution protocol ( ARP ) is used to probe one or more peers to determine how well the link-layer connections are working. It is dependent on the device driver providing the transmit start time and the last receive time. Two options are available: Monitoring Frequency The time interval, in milliseconds, between sending ARP requests. ARP targets A comma separated list of IP addresses to send ARP requests to.
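As a rough, hypothetical illustration of how the GUI choices above map to kernel bonding module parameters, saving an active-backup bond with MII monitoring typically produces an interface configuration similar to the following; the interface name and values are examples only, not output of this procedure:
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-bond0 written after saving the bond
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=dhcp
ONBOOT=yes
# mode    = the policy chosen in the Mode drop-down list
# miimon  = Monitoring Frequency (ms); updelay/downdelay = Link up/down delay (ms)
BONDING_OPTS="mode=active-backup miimon=100 updelay=5000 downdelay=5000"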
|
[
"~]USD nm-connection-editor",
"~]USD nm-connection-editor"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Creating_a_Bond_Connection_Using_a_GUI
|
Chapter 2. Restoring Central database by using the roxctl CLI
|
Chapter 2. Restoring Central database by using the roxctl CLI You can use the roxctl CLI to restore Red Hat Advanced Cluster Security for Kubernetes (RHACS) by using the restore command. This command requires an API token or your administrator password. 2.1. Restoring by using an API token You can restore the entire database of RHACS by using an API token. Prerequisites You have a RHACS backup file. You have an API token with the administrator role. You have installed the roxctl CLI. Procedure Set the ROX_API_TOKEN and the ROX_ENDPOINT environment variables by running the following commands: USD export ROX_API_TOKEN=<api_token> USD export ROX_ENDPOINT=<address>:<port_number> Restore the Central database by running the following command: USD roxctl central db restore <backup_file> 1 1 For <backup_file> , specify the name of the backup file that you want to restore. 2.2. Restoring by using the administrator password You can restore the entire database of RHACS by using your administrator password. Prerequisites You have a RHACS backup file. You have the administrator password. You have installed the roxctl CLI. Procedure Set the ROX_ENDPOINT environment variable by running the following command: USD export ROX_ENDPOINT=<address>:<port_number> Restore the Central database by running the following command: USD roxctl -p <admin_password> \ 1 central db restore <backup_file> 2 1 For <admin_password> , specify the administrator password. 2 For <backup_file> , specify the name of the backup file that you want to restore. 2.3. Resuming the restore operation If your connection is interrupted during a restore operation or you need to go offline, you can resume the restore operation. If you do not have access to the machine running the resume operation, you can use the roxctl central db restore status command to check the status of an ongoing restore operation. If the connection is interrupted, the roxctl CLI automatically attempts to restore a task as soon as the connection is available again. The automatic connection retries depend on the duration specified by the timeout option. Use the --timeout option to specify the time in seconds, minutes or hours after which the roxctl CLI stops trying to resume a restore operation. If the option is not specified, the default timeout is 10 minutes. If a restore operation gets stuck or you want to cancel it, use the roxctl central db restore cancel command to cancel a running restore operation. If a restore operation is stuck, you have canceled it, or the time has expired, you can resume the restore by running the original command again. Important During interruptions, RHACS caches an ongoing restore operation for 24 hours. You can resume this operation by executing the original restore command again. The --timeout option only controls the client-side connection retries and has no effect on the server-side restore cache of 24 hours. You cannot resume restores across Central pod restarts. If a restore operation is interrupted, you must restart it within 24 hours and before restarting Central, otherwise RHACS cancels the restore operation.
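The following sketch strings the status, timeout, and cancel operations described above together; the backup file name and the three-hour timeout are placeholders, and the exact flag placement may vary between roxctl versions:
# Check whether a previously started restore operation is still running.
roxctl central db restore status
# Retry the interrupted restore, allowing client-side reconnection attempts for up to three hours.
roxctl central db restore --timeout 3h <backup_file>
# Abort a restore operation that is stuck; rerunning the original restore command later resumes it.
roxctl central db restore cancel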
|
[
"export ROX_API_TOKEN=<api_token>",
"export ROX_ENDPOINT=<address>:<port_number>",
"roxctl central db restore <backup_file> 1",
"export ROX_ENDPOINT=<address>:<port_number>",
"roxctl -p <admin_password> \\ 1 central db restore <backup_file> 2"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/troubleshooting_central/restoring-central-database-by-using-the-roxctl-cli
|
Chapter 20. Configuring Routes
|
Chapter 20. Configuring Routes 20.1. Route configuration 20.1.1. Creating an HTTP-based route A route allows you to host your application at a public URL. It can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as an administrator. You have a web application that exposes a port and a TCP endpoint listening for traffic on the port. Procedure Create a project called hello-openshift by running the following command: USD oc new-project hello-openshift Create a pod in the project by running the following command: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: USD oc expose pod/hello-openshift Create an unsecured route to the hello-openshift application by running the following command: USD oc expose svc hello-openshift If you examine the resulting Route resource, it should look similar to the following: YAML definition of the created unsecured route: apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: hello-openshift-hello-openshift.<Ingress_Domain> 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift 1 <Ingress_Domain> is the default ingress domain name. The ingresses.config/cluster object is created during the installation and cannot be changed. If you want to specify a different domain, you can specify an alternative cluster domain using the appsDomain option. 2 targetPort is the target port on pods that is selected by the service that this route points to. Note To display your default ingress domain, run the following command: USD oc get ingresses.config/cluster -o jsonpath={.spec.domain} 20.1.2. Configuring route timeouts You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end. Prerequisites You need a deployed Ingress Controller on a running cluster. Procedure Using the oc annotate command, add the timeout to the route: USD oc annotate route <route_name> \ --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1 1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute : USD oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s 20.1.3. HTTP Strict Transport Security HTTP Strict Transport Security (HSTS) policy is a security enhancement, which signals to the browser client that only HTTPS traffic is allowed on the route host. HSTS also optimizes web traffic by signaling HTTPS transport is required, without using HTTP redirects. HSTS is useful for speeding up interactions with websites. When HSTS policy is enforced, HSTS adds a Strict Transport Security header to HTTP and HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. 
When HSTS is enforced, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect. Cluster administrators can configure HSTS to do the following: Enable HSTS per-route Disable HSTS per-route Enforce HSTS per-domain, for a set of domains, or use namespace labels in combination with domains Important HSTS works only with secure routes, either edge-terminated or re-encrypt. The configuration is ineffective on HTTP or passthrough routes. 20.1.3.1. Enabling HTTP Strict Transport Security per-route HTTP strict transport security (HSTS) is implemented in the HAProxy template and applied to edge and re-encrypt routes that have the haproxy.router.openshift.io/hsts_header annotation. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI. Procedure To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. You can use the oc annotate tool to do this by running the following command: USD oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000;\ 1 includeSubDomains;preload" 1 In this example, the maximum age is set to 31536000 seconds, which is approximately one year. Note In this example, the equal sign ( = ) is in quotes. This is required to properly execute the annotate command. Example route configured with an annotation apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 ... spec: host: def.abc.com tls: termination: "reencrypt" ... wildcardPolicy: "Subdomain" 1 Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0 , it negates the policy. 2 Optional. When included, includeSubDomains tells the client that all subdomains of the host must have the same HSTS policy as the host. 3 Optional. When max-age is greater than 0, you can add preload in haproxy.router.openshift.io/hsts_header to allow external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, even before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS, at least once, to get the header. 20.1.3.2. Disabling HTTP Strict Transport Security per-route To disable HTTP strict transport security (HSTS) per-route, you can set the max-age value in the route annotation to 0 . Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI.
Procedure To disable HSTS, set the max-age value in the route annotation to 0 by entering the following command: USD oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Tip You can alternatively add the following annotation to the route's YAML: Example of disabling HSTS per-route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0 To disable HSTS for every route in a namespace, enter the following command: USD oc annotate <route> --all -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Verification To query the annotation for all routes, enter the following command: USD oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: routename HSTS: max-age=0 20.1.3.3. Enforcing HTTP Strict Transport Security per-domain To enforce HTTP Strict Transport Security (HSTS) per-domain for secure routes, add a requiredHSTSPolicies record to the Ingress spec to capture the configuration of the HSTS policy. If you configure a requiredHSTSPolicy to enforce HSTS, then any newly created route must be configured with a compliant HSTS policy annotation. Note To handle upgraded clusters with non-compliant HSTS routes, you can update the manifests at the source and apply the updates. Note You cannot use oc expose route or oc create route commands to add a route in a domain that enforces HSTS, because the API for these commands does not accept annotations. Important HSTS cannot be applied to insecure or non-TLS routes, even if HSTS is requested for all routes globally. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI. Procedure Edit the Ingress config file: USD oc edit ingresses.config.openshift.io/cluster Example HSTS policy apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: 7 - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains 1 Required. requiredHSTSPolicies are validated in order, and the first matching domainPatterns applies. 2 7 Required. You must specify at least one domainPatterns hostname. Any number of domains can be listed. You can include multiple sections of enforcing options for different domainPatterns . 3 Optional. If you include namespaceSelector , it must match the labels of the project where the routes reside, to enforce the set HSTS policy on the routes. Routes that only match the namespaceSelector and not the domainPatterns are not validated. 4 Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. This policy setting allows for a smallest and largest max-age to be enforced.
The largestMaxAge value must be between 0 and 2147483647 . It can be left unspecified, which means no upper limit is enforced. The smallestMaxAge value must be between 0 and 2147483647 . Enter 0 to disable HSTS for troubleshooting, otherwise enter 1 if you never want HSTS to be disabled. It can be left unspecified, which means no lower limit is enforced. 5 Optional. Including preload in haproxy.router.openshift.io/hsts_header allows external services to include this site in their HSTS preload lists. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, before they have interacted with the site. Without preload set, browsers need to interact at least once with the site to get the header. preload can be set with one of the following: RequirePreload : preload is required by the RequiredHSTSPolicy . RequireNoPreload : preload is forbidden by the RequiredHSTSPolicy . NoOpinion : preload does not matter to the RequiredHSTSPolicy . 6 Optional. includeSubDomainsPolicy can be set with one of the following: RequireIncludeSubDomains : includeSubDomains is required by the RequiredHSTSPolicy . RequireNoIncludeSubDomains : includeSubDomains is forbidden by the RequiredHSTSPolicy . NoOpinion : includeSubDomains does not matter to the RequiredHSTSPolicy . You can apply HSTS to all routes in the cluster or in a particular namespace by entering the oc annotate command . To apply HSTS to all routes in the cluster, enter the oc annotate command . For example: USD oc annotate route --all --all-namespaces --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000" To apply HSTS to all routes in a particular namespace, enter the oc annotate command . For example: USD oc annotate route --all -n my-namespace --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000" Verification You can review the HSTS policy you configured. For example: To review the maxAge set for required HSTS policies, enter the following command: USD oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{"\n"}{end}' To review the HSTS annotations on all routes, enter the following command: USD oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains 20.1.4. Troubleshooting throughput issues Sometimes applications deployed through OpenShift Container Platform can cause network throughput issues such as unusually high latency between specific services. Use the following methods to analyze performance issues if pod logs do not reveal any cause of the problem: Use a packet analyzer, such as ping or tcpdump to analyze traffic between a pod and its node. For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to and from a pod. Latency can occur in OpenShift Container Platform if a node interface is overloaded with traffic from other pods, storage devices, or the data plane. USD tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1 1 podip is the IP address for the pod. 
Run the oc get pod <pod_name> -o wide command to get the IP address of a pod. tcpdump generates a file at /tmp/dump.pcap containing all traffic between these two pods. Ideally, run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with: USD tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789 Use a bandwidth measuring tool, such as iperf, to measure streaming throughput and UDP throughput. Run the tool from the pods first, then from the nodes, to locate any bottlenecks. For information on installing and using iperf, see this Red Hat Solution . 20.1.5. Using cookies to keep route statefulness OpenShift Container Platform provides sticky sessions, which enables stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear. OpenShift Container Platform can use cookies to configure session persistence. The Ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the request in the session. The cookie tells the Ingress Controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod. Note Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend. If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides source IP, the same number is set for all connections and traffic is sent to the same pod. 20.1.5.1. Annotating a route with a cookie You can set a cookie name to overwrite the default, auto-generated one for the route. This allows the application receiving route traffic to know the cookie name. By deleting the cookie it can force the request to re-choose an endpoint. So, if a server was overloaded it tries to remove the requests from the client and redistribute them. Procedure Annotate the route with the specified cookie name: USD oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>" where: <route_name> Specifies the name of the route. <cookie_name> Specifies the name for the cookie. For example, to annotate the route my_route with the cookie name my_cookie : USD oc annotate route my_route router.openshift.io/cookie_name="my_cookie" Capture the route hostname in a variable: USD ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}') where: <route_name> Specifies the name of the route. Save the cookie, and then access the route: USD curl USDROUTE_NAME -k -c /tmp/cookie_jar Use the cookie saved by the command when connecting to the route: USD curl USDROUTE_NAME -k -b /tmp/cookie_jar 20.1.6. Path-based routes Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least. However, this depends on the router implementation. 
The following table shows example routes and their accessibility: Table 20.1. Route availability Route When Compared to Accessible www.example.com/test www.example.com/test Yes www.example.com No www.example.com/test and www.example.com www.example.com/test Yes www.example.com Yes www.example.com www.example.com/text Yes (Matched by the host, not the route) www.example.com Yes An unsecured route with a path apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: "/test" 1 to: kind: Service name: service-name 1 The path is the only added attribute for a path-based route. Note Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request. 20.1.7. Route-specific annotations The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route. Important To create a whitelist with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message. Table 20.2. Route annotations Variable Description Environment variable used as default haproxy.router.openshift.io/balance Sets the load-balancing algorithm. Available options are random , source , roundrobin , and leastconn . The default value is random . ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM . haproxy.router.openshift.io/disable_cookies Disables the use of cookies to track related connections. If set to 'true' or 'TRUE' , the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request. router.openshift.io/cookie_name Specifies an optional cookie to use for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route. haproxy.router.openshift.io/pod-concurrent-connections Sets the maximum number of connections that are allowed to a backing pod from a router. Note: If there are multiple pods, each can have this many connections. If you have multiple routers, there is no coordination among them, each may connect this many times. If not set, or set to 0, there is no limit. haproxy.router.openshift.io/rate-limit-connections Setting 'true' or 'TRUE' enables rate limiting functionality which is implemented through stick-tables on the specific backend per route. Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks. haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value. Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks. haproxy.router.openshift.io/rate-limit-connections.rate-http Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value. Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks. haproxy.router.openshift.io/rate-limit-connections.rate-tcp Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value. 
Note: Using this annotation provides basic protection against distributed denial-of-service (DDoS) attacks. haproxy.router.openshift.io/timeout Sets a server-side timeout for the route. (TimeUnits) ROUTER_DEFAULT_SERVER_TIMEOUT haproxy.router.openshift.io/timeout-tunnel This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set. ROUTER_DEFAULT_TUNNEL_TIMEOUT ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after You can set either an IngressController or the ingress config . This annotation redeploys the router and configures HAProxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft-stop. ROUTER_HARD_STOP_AFTER router.openshift.io/haproxy.health.check.interval Sets the interval for the back-end health checks. (TimeUnits) ROUTER_BACKEND_CHECK_INTERVAL haproxy.router.openshift.io/ip_whitelist Sets a whitelist for the route. The whitelist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. Requests from IP addresses that are not in the whitelist are dropped. The maximum number of IP addresses and CIDR ranges allowed in a whitelist is 61. haproxy.router.openshift.io/hsts_header Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route. haproxy.router.openshift.io/log-send-hostname Sets the hostname field in the Syslog header. Uses the hostname of the system. log-send-hostname is enabled by default if any Ingress API logging method, such as sidecar or Syslog facility, is enabled for the router. haproxy.router.openshift.io/rewrite-target Sets the rewrite path of the request on the backend. router.openshift.io/cookie-same-site Sets a value to restrict cookies. The values are: Lax : cookies are transferred between the visited site and third-party sites. Strict : cookies are restricted to the visited site. None : cookies are sent for both same-site and cross-site requests. This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation . haproxy.router.openshift.io/set-forwarded-headers Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are: append : appends the header, preserving any existing header. This is the default value. replace : sets the header, removing any existing header. never : never sets the header, but preserves any existing header. if-none : sets the header if it is not already set. ROUTER_SET_FORWARDED_HEADERS Note Environment variables cannot be edited. Router timeout variables TimeUnits are represented by a number followed by the unit: us (microseconds), ms (milliseconds, default), s (seconds), m (minutes), h (hours), d (days). The regular expression is: [1-9][0-9]*(us|ms|s|m|h|d). Variable Default Description ROUTER_BACKEND_CHECK_INTERVAL 5000ms Length of time between subsequent liveness checks on back ends. ROUTER_CLIENT_FIN_TIMEOUT 1s Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router.
ROUTER_DEFAULT_CLIENT_TIMEOUT 30s Length of time that a client has to acknowledge or send data. ROUTER_DEFAULT_CONNECT_TIMEOUT 5s The maximum connection time. ROUTER_DEFAULT_SERVER_FIN_TIMEOUT 1s Controls the TCP FIN timeout from the router to the pod backing the route. ROUTER_DEFAULT_SERVER_TIMEOUT 30s Length of time that a server has to acknowledge or send data. ROUTER_DEFAULT_TUNNEL_TIMEOUT 1h Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads. ROUTER_SLOWLORIS_HTTP_KEEPALIVE 300s Sets the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value. Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, ROUTER_SLOWLORIS_HTTP_KEEPALIVE adjusts timeout http-keep-alive . It is set to 300s by default, but HAProxy also waits on tcp-request inspect-delay , which is set to 5s . In this case, the overall timeout would be 300s plus 5s . ROUTER_SLOWLORIS_TIMEOUT 10s Length of time the transmission of an HTTP request can take. RELOAD_INTERVAL 5s Allows the minimum frequency for the router to reload and accept new changes. ROUTER_METRICS_HAPROXY_TIMEOUT 5s Timeout for the gathering of HAProxy metrics. A route setting custom timeout apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1 ... 1 Specifies the new timeout with HAProxy supported units ( us , ms , s , m , h , d ). If the unit is not provided, ms is the default. Note Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to time out frequently on that route. A route that allows only one specific IP address metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 A route that allows several IP addresses metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12 A route that allows an IP address CIDR network metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24 A route that allows both an IP address and IP address CIDR networks metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8 A route specifying a rewrite target apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1 ... 1 Sets / as the rewrite path of the request on the backend. Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation. The following table provides examples of the path rewriting behavior for various combinations of spec.path , request path, and rewrite target. Table 20.3. rewrite-target examples: Route.spec.path Request path Rewrite target Forwarded request path /foo /foo / / /foo /foo/ / / /foo /foo/bar / /bar /foo /foo/bar/ / /bar/ /foo /foo /bar /bar /foo /foo/ /bar /bar/ /foo /foo/bar /baz /baz/bar /foo /foo/bar/ /baz /baz/bar/ /foo/ /foo / N/A (request path does not match route path) /foo/ /foo/ / / /foo/ /foo/bar / /bar 20.1.8.
Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... Tip You can alternatively apply the following YAML to configure the route admission policy: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed 20.1.9. Creating a route through an Ingress object Some ecosystem components have an integration with Ingress resources but not with Route resources. To cover this case, OpenShift Container Platform automatically creates managed route objects when an Ingress object is created. These route objects are deleted when the corresponding Ingress objects are deleted. Procedure Define an Ingress object in the OpenShift Container Platform console or by entering the oc create command: YAML Definition of an Ingress apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" 1 spec: rules: - host: www.example.com 2 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate 1 The route.openshift.io/termination annotation can be used to configure the spec.tls.termination field of the Route as Ingress has no field for this. The accepted values are edge , passthrough and reencrypt . All other values are silently ignored. When the annotation value is unset, edge is the default route. The TLS certificate details must be defined in the template file to implement the default edge route and to prevent producing an insecure route. 2 When working with an Ingress object, you must specify an explicit host name, unlike when working with routes. You can use the <host_name>.<cluster_ingress_domain> syntax, for example apps.openshiftdemos.com , to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. Otherwise, you must ensure that there is a DNS record for the chosen hostname. 
If you specify the passthrough value in the route.openshift.io/termination annotation, set path to '' and pathType to ImplementationSpecific in the spec: spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443 USD oc apply -f ingress.yaml List your routes: USD oc get routes The result includes an autogenerated route whose name starts with frontend- : NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None If you inspect this route, it looks like this: YAML Definition of an autogenerated route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt to: kind: Service name: frontend 20.1.10. Creating a route using the default certificate through an Ingress object If you create an Ingress object without specifying any TLS configuration, OpenShift Container Platform generates an insecure route. To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration as follows. Prerequisites You have a service that you want to expose. You have access to the OpenShift CLI ( oc ). Procedure Create a YAML file for the Ingress object. In this example, the file is called example-ingress.yaml : YAML definition of an Ingress object apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend ... spec: rules: ... tls: - {} 1 1 Use this exact syntax to specify TLS without specifying a custom certificate. Create the Ingress object by running the following command: USD oc create -f example-ingress.yaml Verification Verify that OpenShift Container Platform has created the expected route for the Ingress object by running the following command: USD oc get routes -o yaml Example output apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 ... spec: ... tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3 ... 1 The name of the route includes the name of the Ingress object followed by a random suffix. 2 In order to use the default certificate, the route should not specify spec.certificate . 3 The route should specify the edge termination policy. 20.1.11. Configuring the OpenShift Container Platform Ingress Controller for dual-stack networking If your OpenShift Container Platform cluster is configured for IPv4 and IPv6 dual-stack networking, your cluster is externally reachable by OpenShift Container Platform routes. The Ingress Controller automatically serves services that have both IPv4 and IPv6 endpoints, but you can configure the Ingress Controller for single-stack or dual-stack services. Prerequisites You deployed an OpenShift Container Platform cluster on bare metal. You installed the OpenShift CLI ( oc ). Procedure To have the Ingress Controller serve traffic over IPv4/IPv6 to a workload, you can create a service YAML file or modify an existing service YAML file by setting the ipFamilies and ipFamilyPolicy fields.
For example: Sample service YAML file apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: "<resource_version_number>" selfLink: "/api/v1/namespaces/<namespace_name>/services/<service_name>" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {} 1 In a dual-stack instance, there are two different clusterIPs provided. 2 For a single-stack instance, enter IPv4 or IPv6 . For a dual-stack instance, enter both IPv4 and IPv6 . 3 For a single-stack instance, enter SingleStack . For a dual-stack instance, enter RequireDualStack . These resources generate corresponding endpoints . The Ingress Controller now watches endpointslices . To view endpoints , enter the following command: USD oc get endpoints To view endpointslices , enter the following command: USD oc get endpointslices Additional resources Specifying an alternative cluster domain using the appsDomain option 20.2. Secured routes Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. The following sections describe how to create re-encrypt, edge, and passthrough routes with custom certificates. Important If you create routes in Microsoft Azure through public endpoints, the resource names are subject to restriction. You cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 20.2.1. Creating a re-encrypt route with a custom certificate You can configure a secure route using reencrypt TLS termination with a custom certificate by using the oc create route command. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You must also specify a destination CA certificate to enable the Ingress Controller to trust the service's certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , cacert.crt , and (optionally) ca.crt . Substitute the name of the Service resource that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . 
Create a secure Route resource using reencrypt TLS termination and a custom certificate: USD oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route reencrypt --help for more options. 20.2.2. Creating an edge route with a custom certificate You can configure a secure route using edge TLS termination with a custom certificate by using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , and (optionally) ca.crt . Substitute the name of the service that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . Create a secure Route resource using edge TLS termination and a custom certificate. USD oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route edge --help for more options. 20.2.3. Creating a passthrough route You can configure a secure route using passthrough termination by using the oc create route command. With passthrough termination, encrypted traffic is sent straight to the destination without the router providing TLS termination. Therefore no key or certificate is required on the route. Prerequisites You must have a service that you want to expose. 
Procedure Create a Route resource: USD oc create route passthrough route-passthrough-secured --service=frontend --port=8080 If you examine the resulting Route resource, it should look similar to the following: A Secured Route Using Passthrough Termination apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend 1 The name of the object, which is limited to 63 characters. 2 The termination field is set to passthrough . This is the only required tls field. 3 Optional insecureEdgeTerminationPolicy . The only valid values are None , Redirect , or empty for disabled. The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication.
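As an informal check of the three termination types described above, you can compare which certificate is served for each route host. The following is a minimal sketch, assuming the frontend service and the www.example.com hostname from the examples above; for edge and reencrypt routes the certificate is presented by the Ingress Controller, while for passthrough routes it is presented by the destination pod:

# List the routes and their TLS termination types.
oc get routes

# Show the subject and issuer of the certificate presented for the route host.
openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer

# Send a test request; -k skips certificate verification, which is useful with self-signed test certificates.
curl -k https://www.example.com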
|
[
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"oc expose svc hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: hello-openshift-hello-openshift.<Ingress_Domain> 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift",
"oc get ingresses.config/cluster -o jsonpath={.spec.domain}",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000;\\ 1 includeSubDomains;preload\"",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0",
"oc annotate <route> --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: routename HSTS: max-age=0",
"oc edit ingresses.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: 7 - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains",
"oc annotate route --all --all-namespaces --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc annotate route --all -n my-namespace --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{\"\\n\"}{end}'",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains",
"tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1",
"tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789",
"oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"",
"oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"",
"ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')",
"curl USDROUTE_NAME -k -c /tmp/cookie_jar",
"curl USDROUTE_NAME -k -b /tmp/cookie_jar",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 spec: rules: - host: www.example.com 2 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate",
"spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443",
"oc apply -f ingress.yaml",
"oc get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt to: kind: Service name: frontend",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1",
"oc create -f example-ingress.yaml",
"oc get routes -o yaml",
"apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3",
"apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: \"<resource_version_number>\" selfLink: \"/api/v1/namespaces/<namespace_name>/services/<service_name>\" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {}",
"oc get endpoints",
"oc get endpointslices",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc create route passthrough route-passthrough-secured --service=frontend --port=8080",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/configuring-routes
|
8.108. lvm2
|
8.108. lvm2 8.108.1. RHBA-2013:1704 - lvm2 bug fix and enhancement update Updated lvm2 packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The lvm2 packages include all of the support for handling read and write operations on physical volumes, creating volume groups from one or more physical volumes and creating one or more logical volumes in volume groups. Bug Fixes BZ# 820991 When visible clustered volume groups (VGs) were present in the system, it was not possible to silently skip them with a proper return error code while the non-clustered locking type was used. To fix this bug, the "--ignoreskippedcluster" option has been added for several LVM commands; namely pvs, vgs, lvs, pvdisplay, vgdisplay, lvdisplay, vgchange, and lvchange. With this option, the clustered VGs are skipped correctly without any warning or error messages, and the return error code does not depend on these clustered VGs. BZ# 834327 Previously, the lvremove command failed to remove a virtual snapshot device if this device was still open. Consequently, the <virtual_snapshot_name>_vorigin device-mapper device was left on the system after the failed removal. A manual removal with dmsetup was required to discard this device. With this update, lvremove has been modified to properly check the LV open count status before proceeding with the removal operation. BZ# 861227 Previously, when the lvconvert command was used with the "--stripes" option, the required supplementary options, such as "--mirrors" or "--repair", "thinpool", or "type raid*/mirror", were not enforced. Consequently, calling "lvconvert --stripes" without accompanying conversion instructions led to an incomplete conversion. With this update, a condition has been added to enforce the correct syntax. As a result, an error message is now displayed in the described scenario. BZ# 880414 Previously, certain lvm2app functions were returning values in sectors instead of bytes. This behavior applied to the values of origin_size, vg_extent_size, stripe_size, region_size, chunk_size, seg_start, and pvseg_size. Consequently, the returned lvm2app results were inconsistent and therefore misleading. This behavior has been changed and all lvm2app functions now return byte values. BZ# 902538 The lvm2 tools determine the PowerPath major number by searching for an "emcpower" line in the /proc/devices file. Previously, some versions of PowerPath used the ID string "power2". As a consequence, on systems with such an identifier, PowerPath devices were not given the expected precedence over PowerPath components which exhibit the same physical volume UUID. With this update, detection of EMC power devices works as expected, and the priority of devices is now set properly. BZ# 902806 Prior to this update, the lvm2 dmeventd daemon attempted to reset to C locales only through the LANG environment variable. However, when the system sets locales using the LC_ALL variable, this variable has a higher priority than the LANG variable, which led to extensive memory consumption. With this update, LC_ALL is reset to C instead of LANG, thus reducing the memory consumption. BZ# 905254 With this update, a specific diagnostic message has been added for the case when the lvmetad daemon was already running or its pidfile was locked for any other reason. Trying to start lvmetad while it is already running now returns a message with a clear indication of the problem: Failed to acquire lock on /var/run/lvmetad.pid. 
Already running? BZ# 907487 Previously, the 'vgreduce --removemissing' command could not be used when missing physical volumes were still used by RAID logical volumes. Now, it is possible for 'vgreduce --removemissing' to replace the failed physical volume with an 'error' segment within the affected RAID logical volumes and remove the PV from the volume group. However, in most cases it is better to replace a failed RAID device with a spare one (with use of 'lvconvert --repair') if possible. BZ# 910104 Under certain circumstances, cached metadata in the lvmetad daemon could have leaked during metadata updates. With this update, lvmetad has been fixed to prevent the leak. BZ# 913644 Previously, if a device had failed after the vgexport command was issued, it was impossible to import the volume group. Additionally, this failure to import also meant it was impossible to repair the volume group. It is now possible to use the '--force' option with vgimport to import volume groups even if there are devices missing. BZ# 914143 When LVM scans devices for LVM meta data, it applies several filters, such as the multipath filter, MD component filter, or partition signature filter. Previously, the order in which these filters were applied caused the multipath filter to fail to filter out a multipath component, because the device had already been accessed by other filters. Consequently, I/O errors occurred if the path was not accessible. With this update, the order of filtering has been changed and the multipath filter now works as expected. BZ# 919604 The 'raid1' type can be used to set the device fault tolerance for thinpool logical volumes. It is no longer possible to create thinpools on top of logical volumes of 'mirror' segment type. The existing thinpools with data or meta data areas of 'mirror' segment type will still function; however, it is recommended to convert these to 'raid1' with use of the 'lvconvert' command. BZ# 928537 When using the pvcreate command with the --restorefile and --uuid options while the supplied UUID was incorrect, an internal error message about a memory leak was issued: Internal error: Unreleased memory pool(s) found. With this update, the memory leak has been fixed and the error message is no longer displayed. BZ# 953612 When updating the device-mapper-event package to a later version, the package update script attempts to restart the running dmeventd instance and to replace it with the new dmeventd daemon. However, the older running version of dmeventd does not recognize the restart notification, and therefore manual intervention is needed in this situation. Previously, the following warning message was displayed: WARNING: The running dmeventd instance is too old. In order to provide more precise information and advice on the required action, the following message has been added for the described case: Failed to restart dmeventd daemon. Please, try manual restart BZ# 953867 When using the lvmetad daemon together with the accompanying LVM autoactivation feature, the logical volumes on top of encrypted devices were not automatically activated during system boot. This was caused by ignoring the extra udev event that was artificially generated during system boot to initialize all existing devices. This bug has been fixed, and LVM now properly recognizes the udev event used to initialize the devices at boot, including encrypted devices. 
BZ# 954061 When using the lvmetad daemon together with the accompanying LVM autoactivation feature, the device-mapper devices representing the logical volumes were not refreshed after the underlying PV was unplugged or deactivated and then plugged back or activated. This was caused by assigning a different major and minor pair to identify the reconnected device, while LVs mapped on this device still referenced it with the original pairs. This bug has been fixed and LVM now always refreshes logical volumes on a PV device after reactivation. BZ# 962436 Due to a regression introduced in LVM version 2.02.74, when the optimal_io_size device hint was smaller than the default pe_start size of 1 MiB, this optimal_io_size was ignored and the default size was used. With this update, the optimal_io_size is applied correctly to calculate the PV's pe_start value. BZ# 967247 Prior to this update, before adding additional images to a RAID logical volume, the available space was calculated incorrectly. Consequently, if the available space was insufficient, adding these images failed. This bug has been fixed and the calculation is now performed correctly. BZ# 973519 Previously, if the nohup command was used together with LVM commands that do not require input, nohup configured the standard input as write-only while LVM tried to reopen it also for reading. Consequently, the commands terminated with the following message: stdin: fdopen failed: Invalid argument. LVM has been modified, and if the standard input is already open write-only, LVM does not attempt to reopen it for reading. BZ# 976104 Previously, when converting a linear logical volume to a mirror logical volume, the preferred mirror segment type set in the /etc/lvm/lvm.conf configuration file was not always accepted. This behavior has been changed, and the segment type specified with the 'mirror_segtype_default' setting in the configuration file is now applied as expected. BZ# 987693 Due to a code regression, a corruption of the thin snapshot occurred when the underlying thin-pool was created without the '--zero' option. As a consequence, the first 4KB in the snapshot could have been invalidated. This bug has been fixed and the snapshot is no longer corrupted in the aforementioned scenario. BZ# 989347 Due to an error in the LVM allocation code, lvm2 attempted to allocate free space contiguous to an existing striped space. When trying to extend a 3-way striped logical volume using the lvextend command, the lvm2 utility terminated unexpectedly with a segmentation fault. With this update, the behavior of LVM has been modified, and lvextend now completes the extension without a segmentation fault. BZ# 995193 Previously, it was impossible to convert a volume group from clustered to non-clustered with a configuration setting of 'locking_type = 0'. Consequently, problems could arise if the cluster was unavailable and it was necessary to convert the volume group to non-clustered mode. With this update, LVM has been modified to make the aforementioned conversion possible. BZ# 995440 Prior to this update, the repair of inconsistent metadata used an inconsistent code path depending on whether the lvmetad daemon was running and enabled. Consequently, the lvmetad version of meta data repair failed to correct the meta data and a warning message was printed repeatedly by every command until the problem was manually fixed. With this update, the code paths have been reconciled. 
As a result, metadata inconsistencies are automatically repaired as appropriate, regardless of lvmetad. BZ# 997188 When the lvm_list_pvs_free function from the lvm2app library was called on a system with no physical volumes, the lvm2app code tried to free an internal structure that had already been freed. Consequently, the function terminated with a segmentation fault. This bug has been fixed, and the segmentation fault no longer occurs when calling lvm_list_pvs_free. BZ# 1007406 When using LVM logical volumes on MD RAID devices as PVs while the lvmetad daemon was enabled, the accompanying logical volume automatic activation sometimes left incomplete device-mapper devices on the system. Consequently, no further logical volumes could be activated without manual cleanup of the dangling device-mapper devices. This bug has been fixed, and dangling devices are no longer left on the system. BZ# 1009700 Previously, LVM commands could become unresponsive when attempting to read an LVM mirror just after a write failure but before the repair command handled the failure. With this update, a new 'ignore_lvm_mirrors' configuration option has been added to avoid this issue. Setting this option to '1' will cause LVM mirrors to be ignored and prevent the described problem. Ignoring LVM mirrors also means that it is impossible to stack volume groups on LVM mirrors. The aforementioned problem is not present with the LVM RAID types, like "raid1". It is recommended to use the RAID segment types, especially when attempting to stack volume groups on top of mirrored logical volumes. BZ# 1016322 Prior to this update, a race condition could occur during pool destruction in libdevmapper.so. Consequently, the lvmetad daemon sometimes terminated due to heap corruption, especially under heavier concurrent loads, such as multiple LVM commands executing at once. With this update, correct locking has been introduced to fix the race condition. As a result, lvmetad no longer suffers heap corruption and subsequent crashes. BZ# 1020304 The blkdeactivate script iterates over the list of devices given to it as an argument and tries to unmount or deactivate them one by one. However, in the case of a failed unmount or deactivation, the iteration did not proceed. Consequently, blkdeactivate kept attempting to process the same device and entered an endless loop. This behavior has been fixed, and if blkdeactivate fails to unmount or deactivate any of the devices, the processing of this device is properly skipped and blkdeactivate proceeds as expected. Enhancements BZ# 814737 With this update, lvm2 has been enhanced to support the creation of thin snapshots of existing non-thinly-provisioned logical volumes. A thin pool can now be used for these snapshots of non-thin volumes, providing performance gains. Note that the current lvm2 version does not support the merge feature, so unlike with older lvm2 snapshots, an updated device cannot be merged back into its origin device. BZ# 820203 LVM now supports validation of configuration files and can report any unrecognized entries or entries with wrong value types, in addition to the existing syntax checking. To support this feature, a new "config" configuration section has been added to the /etc/lvm/lvm.conf configuration file. This section has two configurables: "config/checks", which enables or disables the checking (enabled by default), and "config/abort_on_errors", which enables or disables immediate abort on any invalid configuration entry found (disabled by default). 
In addition, new options have been added to the "lvm dumpconfig" command that make use of the new configuration handling code. The "lvm dumpconfig" command now recognizes the following options: --type, --atversion, --ignoreadvanced, --ignoreunsupported, --mergedconfig, --withcomments, --withversions, and --validate. BZ# 888641 Previously, the scm (Storage Class Memory) device was not internally recognized as a partitionable device. Consequently, scm devices could not be used as physical volumes. With this update, the scm device has been added to the internal list of devices which are known to be partitionable. As a result, physical volumes are supported on scm partitions. Also, the new 'lvm devtypes' command has been added to list all known device types. BZ# 894136 When the lvmetad daemon is enabled, meta data is cached in RAM and most LVM commands do not consult on-disk meta data during normal operation. However, when meta data becomes corrupt on disk, LVM may not take notice until a restart of lvmetad or a reboot. With this update, the vgck command used for checking VG consistency has been improved to detect such on-disk corruption even while lvmetad is active and the meta data is cached. As a result, users can issue the "vgck" command to verify the consistency of on-disk meta data at any time, or they can arrange a periodic check using cron. BZ# 903249 If a device temporarily fails, the kernel notices the interruption and regards the device as disabled. Later, the kernel needs to be notified before it accepts the device as alive again. Previously, LVM did not recognize these changes and the 'lvs' command reported the device as operating normally even though the kernel still regarded the device as failed. With this update, 'lvs' has been modified to print a 'p' (partial) if a device is missing and also an 'r' (refresh/replace) if the device is present but the kernel regards the device as still disabled. When seeing an 'r' attribute for a RAID logical volume, the user can then decide if the array should be refreshed (reloaded into the kernel using 'lvchange --refresh') or if the device should be replaced. BZ# 916746 With this update, snapshot management handling of the COW device size has been improved. This version trims the snapshot COW size to the maximal usable size to avoid unnecessary disk space consumption. It also stops snapshot monitoring once the maximal size is reached. BZ# 921280 Support for more complicated device stacks for thin pools has been enhanced to properly resize more complex volumes, such as mirrors or RAID volumes. The new lvm2 version now supports thin data volume extension on RAID volumes. Support for mirrors has been deactivated. BZ# 921734 Prior to this update, the "vgchange -c {y|n}" command call changed all volume groups accessible on the system to clustered or non-clustered. This may have caused an unintentional change and therefore the following prompt has been added to acknowledge this change: Change clustered property of all volume groups? [y/n] This prompt is displayed only if "vgchange -c {y|n}" is called without specifying target volume groups. BZ# 924137 The blkdeactivate utility now suppresses error and information messages from external tools that are called. Instead, only a summary message "done" or "skipped" is issued by blkdeactivate. To show these error messages if needed, a new -e/--errors switch has been added to blkdeactivate. Also, a new -v/--verbose switch has been added to display any information messages from external tools together with any possible debug information. 
BZ# 958511 With this update, the blkdeactivate utility has been modified to correctly handle file systems mounted with bind (the 'mount -o bind' command). Now, blkdeactivate unmounts all such mount points correctly before trying to deactivate the volumes underneath. BZ# 969171 When creating many RAID logical volumes at the same time, it is possible for the background synchronization I/O necessary to calculate parity or copy mirror images to crowd out nominal I/O and cause subsequent logical volume creation to slow dramatically. It is now possible to throttle this initializing I/O via the '--raidmaxrecoveryrate' option to lvcreate. You can use the same argument with lvchange to alter the recovery I/O rate after a logical volume has been created. Reducing the recovery rate will prevent nominal I/O from being crowded out. Initialization will take longer, but the creation of many logical volumes will proceed more quickly. BZ# 985976 With this update, RAID logical volumes that are created with LVM can now be checked by using scrubbing operations. Scrubbing operations are user-initiated checks to ensure that the RAID volume is consistent. There are two scrubbing operations that can be performed by appending the "check" or "repair" option to the "lvchange --syncaction" command. The "check" operation will examine the logical volume for any discrepancies, but will not correct them. The "repair" operation will correct any discrepancies found. BZ# 1003461 This update adds support for a thin external origin to lvm2. This allows any LV to be used as an external origin for a thin volume. All unprovisioned blocks are loaded from the external origin volume, while all once-written blocks are loaded from the thin volume. This functionality is provided by the 'lvcreate --snapshot' command and the 'lvconvert' command that converts any LV into a thin LV. BZ# 1003470 The error message 'Cannot change discards state for active pool volume "pool volume name"' has been improved to be more comprehensible: 'Cannot change support for discards while pool volume "pool volume name" is active'. BZ# 1007074 The repair of corrupted thin pool meta data is now provided by the 'lvconvert --repair' command, which is a low-level manual repair. The thin pool meta data volume can be swapped out of the thin-pool LV with the 'lvconvert --poolmetadata swapLV vg/pool' command, and then the thin_check, thin_dump, and thin_repair commands can be used to run a manual recovery operation. After the repair, the thin pool meta data volume can be swapped back. This low-level repair should only be used when the user is fully aware of thin-pool functionality. BZ# 1017291 LVM now recognizes NVM Express devices as a proper block device type. Users of lvm2 are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
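The following shell sketch illustrates a few of the options described above; the vg00 volume group and raidlv logical volume names are placeholders, and the commands are examples of the documented syntax rather than a recommended procedure:

# Validate /etc/lvm/lvm.conf against the known configuration entries (BZ#820203).
lvm dumpconfig --validate

# Verify the consistency of on-disk VG metadata, even while lvmetad is caching it (BZ#894136).
vgck vg00

# Skip clustered VGs silently when a non-clustered locking type is used (BZ#820991).
pvs --ignoreskippedcluster
vgs --ignoreskippedcluster

# Run scrubbing operations on a RAID LV: "check" only reports discrepancies, "repair" corrects them (BZ#985976).
lvchange --syncaction check vg00/raidlv
lvchange --syncaction repair vg00/raidlv

# Deactivate block devices, showing error messages from the external tools that are called (BZ#924137).
blkdeactivate -e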
|
[
"Internal error: Unreleased memory pool(s) found."
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/lvm2
|
Chapter 5. Installing Logging
|
Chapter 5. Installing Logging OpenShift Dedicated Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator's logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs. Important You must install the Red Hat OpenShift Logging Operator after the log store Operator. You deploy logging by installing the Loki Operator or OpenShift Elasticsearch Operator to manage your log store, followed by the Red Hat OpenShift Logging Operator to manage the components of logging. You can use either the OpenShift Dedicated web console or the OpenShift Dedicated CLI to install or configure logging. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Tip You can alternatively apply all example objects. 5.1. Installing Logging with Elasticsearch using the web console You can use the OpenShift Dedicated web console to install the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators. Elasticsearch is a memory-intensive application. By default, OpenShift Dedicated installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three OpenShift Dedicated nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes. Note If you do not want to use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. Prerequisites Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Procedure To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the OpenShift Dedicated web console: Install the OpenShift Elasticsearch Operator: In the OpenShift Dedicated web console, click Operators OperatorHub . Choose OpenShift Elasticsearch Operator from the list of available Operators, and click Install . Ensure that the All namespaces on the cluster is selected under Installation Mode . Ensure that openshift-operators-redhat is selected under Installed Namespace . You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Dedicated metric, which would cause conflicts. 
Select Enable Operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select stable-5.y as the Update Channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verify that the OpenShift Elasticsearch Operator installed by switching to the Operators Installed Operators page. Ensure that OpenShift Elasticsearch Operator is listed in all projects with a Status of Succeeded . Install the Red Hat OpenShift Logging Operator: In the OpenShift Dedicated web console, click Operators OperatorHub . Choose Red Hat OpenShift Logging from the list of available Operators, and click Install . Ensure that the A specific namespace on the cluster is selected under Installation Mode . Ensure that Operator recommended namespace is openshift-logging under Installed Namespace . Select Enable Operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. Select stable-5.y as the Update Channel . Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verify that the Red Hat OpenShift Logging Operator installed by switching to the Operators Installed Operators page. Ensure that Red Hat OpenShift Logging is listed in the openshift-logging project with a Status of Succeeded . If the Operator does not appear as installed, to troubleshoot further: Switch to the Operators Installed Operators page and inspect the Status column for any errors or failures. Switch to the Workloads Pods page and check the logs in any pods in the openshift-logging project that are reporting issues. Create an OpenShift Logging instance: Switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition details page, select View Instances from the Actions menu. On the ClusterLoggings page, click Create ClusterLogging . You might have to refresh the page to load the data. In the YAML field, replace the code with the following: Note This default OpenShift Logging configuration should support a wide array of environments. Review the topics on tuning and configuring logging components for information on modifications you can make to your OpenShift Logging cluster. 
apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {} 1 The name must be instance . 2 The OpenShift Logging management state. In some cases, if you change the OpenShift Logging defaults, you must set this to Unmanaged . However, an unmanaged deployment does not receive updates until OpenShift Logging is placed back into a managed state. 3 Settings for configuring Elasticsearch. Using the CR, you can configure shard replication policy and persistent storage. 4 Specify the length of time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 7d for seven days. Logs older than the maxAge are deleted. You must specify a retention policy for each log source or the Elasticsearch indices will not be created for that source. 5 Specify the number of Elasticsearch nodes. See the note that follows this list. 6 Enter the name of an existing storage class for Elasticsearch storage. For best performance, specify a storage class that allocates block storage. If you do not specify a storage class, OpenShift Logging uses ephemeral storage. 7 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 8 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. 9 Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. For more information, see Configuring the log visualizer . 10 Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. For more information, see "Configuring Fluentd". Note The maximum number of master nodes is three. If you specify a nodeCount greater than 3 , OpenShift Dedicated creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded. 
For example, if nodeCount=4 , the following nodes are created: USD oc get deployment Example output cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cd-tuhduuw-1-f5c885dbf-dlqws 1/1 Running 0 2m4s elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s Click Create . This creates the logging components, the Elasticsearch custom resource and components, and the Kibana interface. Verify the install: Switch to the Workloads Pods page. Select the openshift-logging project. You should see several pods for OpenShift Logging, Elasticsearch, your collector, and Kibana similar to the following list: Example output cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s 5.2. Installing Logging with Elasticsearch using the CLI Elasticsearch is a memory-intensive application. By default, OpenShift Dedicated installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three OpenShift Dedicated nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes. Prerequisites Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Procedure Create a Namespace object for the OpenShift Elasticsearch Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Dedicated metric, which would cause conflicts. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create a Namespace object for the Red Hat OpenShift Logging Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 1 You must specify openshift-logging as the namespace for logging versions 5.7 and earlier. For logging 5.8 and later, you can use any namespace. 
Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object for the OpenShift Elasticsearch Operator: Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {} 1 You must specify the openshift-operators-redhat namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object to subscribe a namespace to the OpenShift Elasticsearch Operator: Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: <channel> 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-<x.y> as the channel. 3 Automatic allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. Manual requires a user with appropriate credentials to approve the Operator update. 4 Specify redhat-operators . If your OpenShift Dedicated cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM) Apply the subscription by running the following command: USD oc apply -f <filename>.yaml Verify the Operator installation by running the following command: USD oc get csv --all-namespaces Example output NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-node-lease elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-public elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-system elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-credential-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded Create an OperatorGroup object for the 
Red Hat OpenShift Logging Operator: Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 2 1 You must specify openshift-logging as the namespace for logging versions 5.7 and earlier. For logging 5.8 and later, you can use any namespace. 2 You must specify openshift-logging as the namespace for logging versions 5.7 and earlier. For logging 5.8 and later, you can use any namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object to subscribe the namespace to the Red Hat OpenShift Logging Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-logging namespace for logging versions 5.7 and older. For logging 5.8 and later versions, you can use any namespace. 2 Specify stable or stable-x.y as the channel. 3 Specify redhat-operators . If your OpenShift Dedicated cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the subscription object by running the following command: USD oc apply -f <filename>.yaml Create a ClusterLogging object as a YAML file: Example ClusterLogging object apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {} 1 The name must be instance . 2 The OpenShift Logging management state. In some cases, if you change the OpenShift Logging defaults, you must set this to Unmanaged . However, an unmanaged deployment does not receive updates until OpenShift Logging is placed back into a managed state. 3 Settings for configuring Elasticsearch. Using the CR, you can configure shard replication policy and persistent storage. 4 Specify the length of time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 7d for seven days. Logs older than the maxAge are deleted. You must specify a retention policy for each log source or the Elasticsearch indices will not be created for that source. 5 Specify the number of Elasticsearch nodes. 6 Enter the name of an existing storage class for Elasticsearch storage. For best performance, specify a storage class that allocates block storage. If you do not specify a storage class, OpenShift Logging uses ephemeral storage. 7 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 
8 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. 9 Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. 10 Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. Note The maximum number of master nodes is three. If you specify a nodeCount greater than 3 , OpenShift Dedicated creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded. For example, if nodeCount=4 , the following nodes are created: USD oc get deployment Example output cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. 5.3. Installing Logging and the Loki Operator using the CLI To install and configure logging on your OpenShift Dedicated cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the OpenShift Dedicated CLI. Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 
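The LokiStack custom resource created later in this procedure references an object storage secret named logging-loki-s3. As a hedged sketch only, a secret for an AWS S3 bucket might be created as shown below; the bucket, endpoint, region, and credential values are placeholders, and the exact key names expected for your storage type should be confirmed against the Loki Operator object storage documentation:

# Create the object storage secret referenced by the LokiStack CR (all values are placeholders).
oc create secret generic logging-loki-s3 \
  --namespace=openshift-logging \
  --from-literal=bucketnames="<bucket_name>" \
  --from-literal=endpoint="https://s3.<region>.amazonaws.com" \
  --from-literal=region="<region>" \
  --from-literal=access_key_id="<access_key_id>" \
  --from-literal=access_key_secret="<access_key_secret>"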
Create a Namespace object for Loki Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Dedicated metric, which would cause conflicts. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object for Loki Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your OpenShift Dedicated cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a namespace object for the Red Hat OpenShift Logging Operator: Example namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-logging: "true" openshift.io/cluster-monitoring: "true" 2 1 The Red Hat OpenShift Logging Operator is only deployable to the openshift-logging namespace. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 1 You must specify the openshift-logging namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-logging namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your OpenShift Dedicated cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). 
Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: "2022-06-01" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Apply the LokiStack CR object by running the following command: USD oc apply -f <filename>.yaml Create a ClusterLogging CR object: Example ClusterLogging CR object apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . 
Apply the ClusterLogging CR object by running the following command: USD oc apply -f <filename>.yaml Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output USD oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m 5.4. Installing Logging and the Loki Operator using the web console To install and configure logging on your OpenShift Dedicated cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the OperatorHub within the web console. Prerequisites You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). You have administrator permissions. You have access to the OpenShift Dedicated web console. Procedure In the OpenShift Dedicated web console Administrator perspective, go to Operators OperatorHub . Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install . Important The Community Loki Operator is not supported by Red Hat. Select stable or stable-x.y as the Update channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you. Select Enable Operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Install the Red Hat OpenShift Logging Operator: In the OpenShift Dedicated web console, click Operators OperatorHub . Choose Red Hat OpenShift Logging from the list of available Operators, and click Install . Ensure that the A specific namespace on the cluster is selected under Installation Mode . Ensure that Operator recommended namespace is openshift-logging under Installed Namespace . Select Enable Operator recommended cluster monitoring on this namespace . 
This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. Select stable-5.y as the Update Channel . Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: "2022-06-01" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Important It is not possible to change the number 1x for the deployment size. Click Create . Create an OpenShift Logging instance: Switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition details page, select View Instances from the Actions menu. On the ClusterLoggings page, click Create ClusterLogging . You might have to refresh the page to load the data. In the YAML field, replace the code with the following: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 
2 Namespace must be openshift-logging . Verification Go to Operators Installed Operators . Make sure the openshift-logging project is selected. In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date . Note An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page. Additional resources About the OpenShift SDN default CNI network provider About the OVN-Kubernetes default Container Network Interface (CNI) network provider
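The LokiStack examples in this section reference a log store secret named logging-loki-s3 but do not show how it is created. The following sketch assumes an AWS S3 bucket; the required secret keys differ for other supported object stores, and every value in angle brackets is a placeholder:
oc create secret generic logging-loki-s3 \
  --namespace=openshift-logging \
  --from-literal=bucketnames="<bucket_name>" \
  --from-literal=endpoint="https://s3.<aws_region>.amazonaws.com" \
  --from-literal=region="<aws_region>" \
  --from-literal=access_key_id="<access_key_id>" \
  --from-literal=access_key_secret="<access_key_secret>"
A quick CLI spot check of the Operator and LokiStack status, roughly equivalent to the green checkmarks described above, might look like this:
oc get csv -n openshift-operators-redhat
oc get csv -n openshift-logging
oc get lokistack logging-loki -n openshift-logging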
|
[
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {}",
"oc get deployment",
"cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cd-tuhduuw-1-f5c885dbf-dlqws 1/1 Running 0 2m4s elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s",
"cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\"",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: <channel> 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator",
"oc apply -f <filename>.yaml",
"oc get csv --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-node-lease elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-public elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-system elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-credential-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {}",
"oc get deployment",
"cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: ocpConsole: logsLimit: 15 managementState: Managed",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed"
] |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/logging/cluster-logging-deploying
|
Backup and restore
|
Backup and restore OpenShift Container Platform 4.15 Backing up and restoring your OpenShift Container Platform cluster Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/backup_and_restore/index
|
Chapter 9. Setting up to Develop Containerized Applications
|
Chapter 9. Setting up to Develop Containerized Applications Red Hat supports the development of containerized applications based on Red Hat Enterprise Linux, Red Hat OpenShift , and a number of other Red Hat products. Red Hat Container Development Kit (CDK) provides a Red Hat Enterprise Linux virtual machine that runs a single-node Red Hat OpenShift 3 cluster. It does not support OpenShift 4. Follow the instructions in the Red Hat Container Development Kit Getting Started Guide, Chapter 1.4., Installing CDK . Red Hat CodeReady Containers (CRC) brings a minimal OpenShift 4 cluster to your local computer, providing a minimal environment for development and testing purposes. CodeReady Containers is mainly targeted at running on developers' desktops. Red Hat Development Suite provides Red Hat tools for the development of containerized applications in Java, C, and C++. It consists of Red Hat JBoss Developer Studio , OpenJDK , Red Hat Container Development Kit , and other minor components. To install DevSuite , follow the instructions in the Red Hat Development Suite Installation Guide . .NET Core 3.1 is a general-purpose development platform for building high-quality applications that run on the OpenShift Container Platform versions 3.3 and later. For installation and usage instructions, see the .NET Core Getting Started Guide Chapter 2., Using .NET Core 3.1 on Red Hat OpenShift Container Platform . Additional Resources Red Hat CodeReady Studio - Getting Started with Container and Cloud-based Development Product Documentation for Red Hat Container Development Kit Product Documentation for OpenShift Container Platform Red Hat Enterprise Linux Atomic Host - Overview of Containers in Red Hat Systems
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/setting-up_setup-developing-containers
|
Chapter 6. Preparing a UEFI HTTP installation source
|
Chapter 6. Preparing a UEFI HTTP installation source As an administrator of a server on a local network, you can configure an HTTP server to enable HTTP boot and network installation for other systems on your network. 6.1. Network install overview A network installation allows you to install Red Hat Enterprise Linux to a system that has access to an installation server. At a minimum, two systems are required for a network installation: Server A system running a DHCP server, an HTTP, HTTPS, FTP, or NFS server, and in the PXE boot case, a TFTP server. Although each server can run on a different physical system, the procedures in this section assume a single system is running all servers. Client The system to which you are installing Red Hat Enterprise Linux. Once installation starts, the client queries the DHCP server, receives the boot files from the HTTP or TFTP server, and downloads the installation image from the HTTP, HTTPS, FTP or NFS server. Unlike other installation methods, the client does not require any physical boot media for the installation to start. To boot a client from the network, enable network boot in the firmware or in a quick boot menu on the client. On some hardware, the option to boot from a network might be disabled, or not available. The workflow steps to prepare to install Red Hat Enterprise Linux from a network using HTTP or PXE are as follows: Procedure Export the installation ISO image or the installation tree to an NFS, HTTPS, HTTP, or FTP server. Configure the HTTP or TFTP server and DHCP server, and start the HTTP or TFTP service on the server. Boot the client and start the installation. You can choose between the following network boot protocols: HTTP Red Hat recommends using HTTP boot if your client UEFI supports it. HTTP boot is usually more reliable. PXE (TFTP) PXE boot is more widely supported by client systems, but sending the boot files over this protocol might be slow and result in timeout failures. Additional resources Red Hat Satellite product documentation 6.2. Configuring the DHCPv4 server for network boot Enable the DHCP version 4 (DHCPv4) service on your server, so that it can provide network boot functionality. Prerequisites You are preparing network installation over the IPv4 protocol. For IPv6, see Configuring the DHCPv6 server for network boot instead. Find the network addresses of the server. In the following examples, the server has a network card with this configuration: IPv4 address 192.168.124.2/24 IPv4 gateway 192.168.124.1 Procedure Install the DHCP server: Set up a DHCPv4 server. Enter the following configuration in the /etc/dhcp/dhcpd.conf file. Replace the addresses to match your network card. Start the DHCPv4 service: 6.3. Configuring the DHCPv6 server for network boot Enable the DHCP version 6 (DHCPv6) service on your server, so that it can provide network boot functionality. Prerequisites You are preparing network installation over the IPv6 protocol. For IPv4, see Configuring the DHCPv4 server for network boot instead. Find the network addresses of the server. In the following examples, the server has a network card with this configuration: IPv6 address fd33:eb1b:9b36::2/64 IPv6 gateway fd33:eb1b:9b36::1 Procedure Install the DHCP server: Set up a DHCPv6 server. Enter the following configuration in the /etc/dhcp/dhcpd6.conf file. Replace the addresses to match your network card. Start the DHCPv6 service: If DHCPv6 packets are dropped by the RP filter in the firewall, check its log.
If the log contains the rpfilter_DROP entry, disable the filter using the following configuration in the /etc/firewalld/firewalld.conf file: 6.4. Configuring the HTTP server for HTTP boot You must install and enable the httpd service on your server so that the server can provide HTTP boot resources on your network. Prerequisites Find the network addresses of the server. In the following examples, the server has a network card with the 192.168.124.2 IPv4 address. Procedure Install the HTTP server: Create the /var/www/html/redhat/ directory: Download the RHEL DVD ISO file. See All Red Hat Enterprise Linux Downloads . Create a mount point for the ISO file: Mount the ISO file: Copy the boot loader, kernel, and initramfs from the mounted ISO file into your HTML directory: Make the boot loader configuration editable: Edit the /var/www/html/redhat/EFI/BOOT/grub.cfg file and replace its content with the following: In this file, replace the following strings: RHEL-9-3-0-BaseOS-x86_64 and Red Hat Enterprise Linux 9.3 Edit the version number to match the version of RHEL that you downloaded. 192.168.124.2 Replace with the IP address to your server. Make the EFI boot file executable: Open ports in the firewall to allow HTTP (80), DHCP (67, 68) and DHCPv6 (546, 547) traffic: This command enables temporary access until the server reboot. Optional: To enable permanent access, add the --permanent option to the command. Reload firewall rules: Start the HTTP server: Make the html directory and its content readable and executable: Restore the SELinux context of the html directory:
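Before booting a client, it can help to confirm from another machine on the network that the boot loader, kernel, and installation tree are reachable over HTTP. This is a minimal sketch; the IP address matches the example configuration above, and the .treeinfo check assumes the mounted ISO exposes that file at the repository root:
curl -I http://192.168.124.2/redhat/EFI/BOOT/BOOTX64.EFI
curl -I http://192.168.124.2/redhat/images/pxeboot/vmlinuz
curl -s http://192.168.124.2/redhat/iso/.treeinfo | head
A 200 response for the first two requests indicates that the copied boot files are being served; the third request confirms that the installation repository referenced by inst.repo is reachable.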
|
[
"dnf install dhcp-server",
"option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }",
"systemctl enable --now dhcpd",
"dnf install dhcp-server",
"option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }",
"systemctl enable --now dhcpd6",
"IPv6_rpfilter=no",
"dnf install httpd",
"mkdir -p /var/www/html/redhat/",
"mkdir -p /var/www/html/redhat/iso/",
"mount -o loop,ro -t iso9660 path-to-RHEL-DVD.iso /var/www/html/redhat/iso",
"cp -r /var/www/html/redhat/iso/images /var/www/html/redhat/ cp -r /var/www/html/redhat/iso/EFI /var/www/html/redhat/",
"chmod 644 /var/www/html/redhat/EFI/BOOT/grub.cfg",
"set default=\"1\" function load_video { insmod efi_gop insmod efi_uga insmod video_bochs insmod video_cirrus insmod all_video } load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set timeout=60 # END /etc/grub.d/00_header # search --no-floppy --set=root -l ' RHEL-9-3-0-BaseOS-x86_64 ' # BEGIN /etc/grub.d/10_linux # menuentry 'Install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Test this media & install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } submenu 'Troubleshooting -->' { menuentry 'Install Red Hat Enterprise Linux 9.3 in text mode' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.text quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Rescue a Red Hat Enterprise Linux system' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.rescue quiet initrdefi ../../images/pxeboot/initrd.img } }",
"chmod 755 /var/www/html/redhat/EFI/BOOT/BOOTX64.EFI",
"firewall-cmd --zone public --add-port={80/tcp,67/udp,68/udp,546/udp,547/udp}",
"firewall-cmd --reload",
"systemctl enable --now httpd",
"chmod -cR u=rwX,g=rX,o=rX /var/www/html",
"restorecon -FvvR /var/www/html"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/preparing-to-install-from-the-network-using-http_rhel-installer
|
1.3. Available Services
|
1.3. Available Services All Red Hat Enterprise Linux systems have some services already available to configure authentication for local users on local systems. These include: Authentication Setup The Authentication Configuration tool ( authconfig ) sets up different identity back ends and means of authentication (such as passwords, fingerprints, or smart cards) for the system. Identity Back End Setup The Security System Services Daemon (SSSD) sets up multiple identity providers (primarily LDAP-based directories such as Microsoft Active Directory or Red Hat Enterprise Linux IdM) which can then be used by both the local system and applications for users. Passwords and tickets are cached, allowing both offline authentication and single sign-on by reusing credentials. The realmd service is a command-line utility that allows you to configure an authentication back end, which is SSSD for IdM. The realmd service detects available IdM domains based on the DNS records, configures SSSD, and then joins the system as an account to a domain. Name Service Switch (NSS) is a mechanism for low-level system calls that return information about users, groups, or hosts. NSS determines what source, that is, which modules, should be used to obtain the required information. For example, user information can be located in traditional UNIX files, such as the /etc/passwd file, or in LDAP-based directories, while host addresses can be read from files, such as the /etc/hosts file, or the DNS records; NSS locates where the information is stored. Authentication Mechanisms Pluggable Authentication Modules (PAM) provide a system to set up authentication policies. An application using PAM for authentication loads different modules that control different aspects of authentication; which PAM module an application uses is based on how the application is configured. The available PAM modules include Kerberos, Winbind, or local UNIX file-based authentication. Other services and applications are also available, but these are common ones.
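As a brief illustration of how these pieces fit together, realmd can discover and join an identity domain, after which NSS calls resolve domain users through SSSD. This is a sketch only; the domain name, administrative account, and user name are placeholders:
realm discover example.com
realm join example.com --user=admin_user
getent passwd user01@example.com
id user01@example.com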
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/default-options
|
Support
|
Support Red Hat build of MicroShift 4.18 Using support tools for MicroShift Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/support/index
|
Chapter 8. Desktop
|
Chapter 8. Desktop GNOME Shell rebased to version 3.28 In Red Hat Enterprise Linux 7.6, GNOME Shell has been rebased to upstream version 3.28. Notable enhancements include: New GNOME Boxes features New on-screen keyboard Extended devices support, most significantly integration for the Thunderbolt 3 interface Improvements for GNOME Software, dconf-editor and GNOME Terminal Note that Nautilus file manager has been kept in version 3.26 to preserve the behavior of the desktop icons. (BZ#1567133) The sane-backends package is now built with systemd support Scanner Access Now Easy (SANE) is a universal scanner interface whose backend's and library's features are provided by the sane-backends package. This update brings the following changes to SANE: The sane-backends package is built with systemd support. The saned daemon can be run without the need to create unit files manually, because these files are now shipped with sane-backends . (BZ# 1512252 ) FreeType rebased to version 2.8 The FreeType font engine has been rebased to version 2.8, which is required by GNOME 3.28. The 2.8 version has been modified to be API and Application Binary Interface (ABI) compatible with the version 2.4.11. (BZ# 1576504 ) Nvidia Volta-based graphics cards are now supported This update adds support for Nvidia Volta-based graphics cards. As a result, the modesetting user-space driver, which is able to handle the basic operations and single graphic output, is used. However, 3D graphic is handled by the llvmpipe driver because Nvidia did not share public signed firmware for 3D. To reach maximum performance of the card, use the Nvidia binary driver. (BZ#1457161) xorg-x11-server rebased to version 1.20.0-0.1 The xorg-x11-server packages have been rebased to upstream version 1.20.0-0.1, which provides a number of bug fixes and enhancements over the version: Added support for the following input devices: Wacom Cintiq Pro 24, Wacom Cintiq Pro 32 tablet, Wacom Pro Pen 3D. Added support for Intel Cannon Lake and Whiskey Lake platform GPUs. Added support for S3TC texture compression in OpenGL Added support for X11 backing store always mode. Added support for Nvidia Volta series of graphics. Added support for AMD Vega graphics and Raven APU. (BZ#1564632)
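As a practical note on the systemd support described above, the shipped units can be enabled directly rather than writing unit files by hand. This is a sketch and assumes the socket unit name used by the sane-backends package:
systemctl enable --now saned.socket
systemctl status saned.socket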
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/new_features_desktop
|
Chapter 4. About Logging
|
Chapter 4. About Logging As a cluster administrator, you can deploy logging on an OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. You can forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. You can also visualize your log data in the OpenShift Container Platform web console, or the Kibana web console, depending on your deployed log storage solution. Note The Kibana web console is now deprecated and is planned to be removed in a future logging release. OpenShift Container Platform cluster administrators can deploy logging by using Operators. For information, see Installing logging . The Operators are responsible for deploying, upgrading, and maintaining logging. After the Operators are installed, you can create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support logging. You can also create a ClusterLogForwarder CR to specify which logs are collected, how they are transformed, and where they are forwarded to. Note Because the internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in Forward audit logs to the log store . 4.1. Logging architecture The major components of the logging are: Collector The collector is a daemonset that deploys pods to each OpenShift Container Platform node. It collects log data from each node, transforms the data, and forwards it to configured outputs. You can use the Vector collector or the legacy Fluentd collector. Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Log store The log store stores log data for analysis and is the default output for the log forwarder. You can use the default LokiStack log store, the legacy Elasticsearch log store, or forward logs to additional external log stores. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Visualization You can use a UI component to view a visual representation of your log data. The UI provides a graphical interface to search, query, and view stored logs. The OpenShift Container Platform web console UI is provided by enabling the OpenShift Container Platform console plugin. Note The Kibana web console is now deprecated and is planned to be removed in a future logging release. Logging collects container logs and node logs. These are categorized into types: Application logs Container logs generated by user applications running in the cluster, except infrastructure container applications.
Infrastructure logs Container logs generated by infrastructure namespaces: openshift* , kube* , or default , as well as journald messages from nodes. Audit logs Logs generated by auditd, the node audit system, which are stored in the /var/log/audit/audit.log file, and logs from the auditd , kube-apiserver , openshift-apiserver services, as well as the ovn project if enabled. Additional resources Log visualization with the web console 4.2. About deploying logging Administrators can deploy the logging by using the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to install the logging Operators. The Operators are responsible for deploying, upgrading, and maintaining the logging. Administrators and application developers can view the logs of the projects for which they have view access. 4.2.1. Logging custom resources You can configure your logging deployment with custom resource (CR) YAML files implemented by each Operator. Red Hat OpenShift Logging Operator : ClusterLogging (CL) - After the Operators are installed, you create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support the logging. The ClusterLogging CR deploys the collector and forwarder, which currently are both implemented by a daemonset running on each node. The Red Hat OpenShift Logging Operator watches the ClusterLogging CR and adjusts the logging deployment accordingly. ClusterLogForwarder (CLF) - Generates collector configuration to forward logs per user configuration. Loki Operator : LokiStack - Controls the Loki cluster as log store and the web proxy with OpenShift Container Platform authentication integration to enforce multi-tenancy. OpenShift Elasticsearch Operator : Note These CRs are generated and managed by the OpenShift Elasticsearch Operator. Manual changes cannot be made without being overwritten by the Operator. ElasticSearch - Configure and deploy an Elasticsearch instance as the default log store. Kibana - Configure and deploy Kibana instance to search, query and view logs. 4.2.2. About JSON OpenShift Container Platform Logging You can use JSON logging to configure the Log Forwarding API to parse JSON strings into a structured object. You can perform the following tasks: Parse JSON logs Configure JSON log data for Elasticsearch Forward JSON logs to the Elasticsearch log store 4.2.3. About collecting and storing Kubernetes events The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by OpenShift Container Platform Logging. You must manually deploy the Event Router. For information, see About collecting and storing Kubernetes events . 4.2.4. About troubleshooting OpenShift Container Platform Logging You can troubleshoot the logging issues by performing the following tasks: Viewing logging status Viewing the status of the log store Understanding logging alerts Collecting logging data for Red Hat Support Troubleshooting for critical alerts 4.2.5. About exporting fields The logging system exports fields. Exported fields are present in the log records and are available for searching from Elasticsearch and Kibana. For information, see About exporting fields . 4.2.6. About event routing The Event Router is a pod that watches OpenShift Container Platform events so they can be collected by logging. The Event Router collects events from all projects and writes them to STDOUT . Fluentd collects those events and forwards them into the OpenShift Container Platform Elasticsearch instance. 
Elasticsearch indexes the events to the infra index. You must manually deploy the Event Router. For information, see Collecting and storing Kubernetes events .
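To make the ClusterLogForwarder description above more concrete, a minimal CR of the kind discussed in this chapter might look like the following sketch. It forwards application logs to the default log store and asks the collector to parse JSON message bodies; the field names follow the logging.openshift.io/v1 API, but treat the pipeline layout as illustrative rather than a drop-in configuration:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: application-logs
      inputRefs:
        - application
      outputRefs:
        - default
      parse: json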
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/logging/cluster-logging
|
5.6. Red Hat JBoss Data Grid and Red Hat JBoss Fuse
|
5.6. Red Hat JBoss Data Grid and Red Hat JBoss Fuse 5.6.1. Installing camel-jbossdatagrid for Red Hat JBoss Fuse Red Hat JBoss Fuse is an OSGi container based on the Karaf container. To run Red Hat JBoss Data Grid and JBoss Fuse using camel-jbossdatagrid , ensure that both JBoss Data Grid 6.6 and JBoss Fuse 6.1 (Full Installation) are installed. Procedure 5.1. Installing JBoss Data Grid For information about installing JBoss Data Grid, see Part II, "Download and Install Red Hat JBoss Data Grid" . Only the following JBoss Data Grid components are required to run the camel component in JBoss Fuse: JBoss Data Grid Maven repository. The JBoss Data Grid Server package (to use the Hot Rod client). The camel-jbossdatagrid library is also available in a separate distribution called jboss-datagrid-6.6.1-camel-library . Procedure 5.2. Installing JBoss Fuse Prerequisites Before attempting to install and use Red Hat JBoss Fuse, ensure your system meets the minimum requirements. For supported Platforms and recommended Java Runtime platforms, see the Red Hat JBoss Fuse Installation Guide . The following hardware is required for the JBoss Fuse 6.1 Full Installation. 700 MB of free disk space 2 GB of RAM In addition to the disk space required for the base installation, a running system will require space for caching, persistent message stores, and other functions. Download the JBoss Fuse Full Installation You can download the Red Hat JBoss Fuse archive from the Red Hat Customer Portal>Downloads>Red Hat JBoss Middleware>Downloads page, after you register and login to your customer account. When logged in: Select Fuse , listed under Integrated Platforms in the sidebar menu. Select 6.1.0 from the Version drop-down list on the Software Downloads page. Click the Download button to the Red Hat JBoss Fuse 6.1.0 distribution file to download. JBoss Fuse allows you to choose between installations that contain different feature sets. To run JBoss Data Grid with JBoss Fuse, the Full installation is required. The Full installation includes the following: Apache Karaf Apache Camel Apache ActiveAMQ Apache CXF Fuse Management Console (hawtio) JBI components Unpacking the Archive Red Hat JBoss Fuse is installed by unpacking an archive on a system. JBoss Fuse is packaged as a zip file. Using a suitable archive tool, unpack Red Hat JBoss Fuse into a directory to which you have full access. Warning Do not unpack the archive file into a folder that has spaces in its path name. For example, do not unpack into C:\Documents and Settings\Greco Roman\Desktop\fusesrc. Additionally, do not unpack the archive file into a folder that has any of the following special characters in its path name: #, %, ^, ". Adding a Remote Console User The server's remote command console is not configured with a default user. Before remotely connecting to the server's console, add a user to the configuration. Important The information in this file is unencrypted so it is not suitable for environments that require strict security. To add a user: Open InstallDir/etc/users.properties in your favorite text editor. Locate the line #admin=admin,admin . This line specifies a user admin with the password admin and the role admin . Remove the leading # to uncomment the line. Replace the first admin with a name for the user. Replace the second admin with the password for the user. Leave the last admin as it is, and save the changes. 
Note To access the Fuse Management Console to monitor and manage your Camel routes, ActiveMQ brokers, Web applications, and so on, open a browser to http://localhost:8181/hawtio , after starting Red Hat JBoss Fuse. Red Hat JBoss Fuse Maven Repositories To use Maven to build projects, specify the location of the artifacts in a Maven settings.xml file. The following JBoss Fuse Maven repository contains the required dependencies for Camel and must be added to the settings.xml file. The JBoss Fuse repository runs alongside the JBoss Data Grid repository. JBoss Data Grid includes a features.xml file for Karaf that deploys all artifacts required for the camel-jbossdatagrid component. This file is not included in the JBoss Fuse container distribution. The features.xml file is in jboss-datagrid-6.6.1-maven-repository/org/apache/camel/camel-jbossdatagrid/USD{version}/ . No further configuration of the JBoss Data Grid repository is required. For more information about installing and getting started with JBoss Fuse, see the Red Hat JBoss Fuse documentation on the Red Hat Customer Portal.
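For reference, the repository URL shown in this section is typically added to settings.xml as a repository entry inside an active profile. The following fragment is a sketch only; the profile id is arbitrary, and the rest of your settings.xml is assumed to already exist:
<profile>
  <id>jboss-fuse-repos</id>
  <repositories>
    <repository>
      <id>fusesource-public</id>
      <url>https://repo.fusesource.com/nexus/content/groups/public/</url>
      <releases><enabled>true</enabled></releases>
      <snapshots><enabled>false</enabled></snapshots>
    </repository>
  </repositories>
</profile>
Remember to reference the profile from an activeProfiles entry, or activate it by other means, so that Maven actually consults the repository.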
|
[
"https://repo.fusesource.com/nexus/content/groups/public/"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-red_hat_jboss_data_grid_and_red_hat_jboss_fuse
|
function::indent_depth
|
function::indent_depth Name function::indent_depth - returns the global nested-depth Synopsis Arguments delta the amount of depth added/removed for each call Description This function returns a number for appropriate indentation, similar to indent . Call it with a small positive or matching negative delta. Unlike the thread_indent_depth function, the indent does not track individual indent values on a per thread basis.
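A hedged usage sketch: because indent_depth only returns a number, a simple way to see it in action is to print the depth next to matching call and return probes. The probe point and the ppfunc call below are illustrative and may need adjusting for your kernel and SystemTap version:
probe kernel.function("vfs_read").call { printf("depth %d -> %s\n", indent_depth(1), ppfunc()) }
probe kernel.function("vfs_read").return { printf("depth %d <- %s\n", indent_depth(-1), ppfunc()) }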
|
[
"indent_depth:long(delta:long)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-indent-depth
|
function::delete_stopwatch
|
function::delete_stopwatch Name function::delete_stopwatch - Remove an existing stopwatch Synopsis Arguments name the stopwatch name Description Remove stopwatch name .
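A hedged sketch of the full stopwatch lifecycle, assuming the companion functions from the same tapset (start_stopwatch, read_stopwatch_ms, and stop_stopwatch):
probe begin { start_stopwatch("bench") }
probe timer.s(5) {
  printf("elapsed: %d ms\n", read_stopwatch_ms("bench"))
  stop_stopwatch("bench")
  delete_stopwatch("bench")
  exit()
}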
|
[
"delete_stopwatch(name:string)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-delete-stopwatch
|
Chapter 1. Support policy for Red Hat build of OpenJDK
|
Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these versions remain similar to Oracle JDK versions that are designated as long-term support (LTS). A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is no longer a supported configuration for Red Hat build of OpenJDK.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.4/rn-openjdk-support-policy
|
Chapter 1. Installing tkn
|
Chapter 1. Installing tkn Use the CLI tool to manage Red Hat OpenShift Pipelines from a terminal. You can install the CLI tool on different platforms. Note Both the archives and the RPMs contain the following executables: tkn tkn-pac opc Important Running Red Hat OpenShift Pipelines with the opc CLI tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.1. Installing the Red Hat OpenShift Pipelines CLI on Linux For Linux distributions, you can download the CLI as a tar.gz archive. Procedure Download the relevant CLI tool. Linux (x86_64, amd64) Linux on IBM zSystems and IBM(R) LinuxONE (s390x) Linux on IBM Power (ppc64le) Linux on ARM (aarch64, arm64) Unpack the archive: USD tar xvzf <file> Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: USD echo USDPATH 1.2. Installing the Red Hat OpenShift Pipelines CLI on Linux using an RPM For Red Hat Enterprise Linux (RHEL) version 8, you can install the Red Hat OpenShift Pipelines CLI as an RPM. Prerequisites You have an active OpenShift Container Platform subscription on your Red Hat account. You have root or sudo privileges on your local system. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*pipelines*' In the output for the command, find the pool ID for your OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by Red Hat OpenShift Pipelines: Linux (x86_64, amd64) # subscription-manager repos --enable="pipelines-1.18-for-rhel-8-x86_64-rpms" Linux on IBM zSystems and IBM(R) LinuxONE (s390x) # subscription-manager repos --enable="pipelines-1.18-for-rhel-8-s390x-rpms" Linux on IBM Power (ppc64le) # subscription-manager repos --enable="pipelines-1.18-for-rhel-8-ppc64le-rpms" Linux on ARM (aarch64, arm64) # subscription-manager repos --enable="pipelines-1.18-for-rhel-8-aarch64-rpms" Install the openshift-pipelines-client package: # yum install openshift-pipelines-client After you install the CLI, it is available using the tkn command: USD tkn version 1.3. Installing the Red Hat OpenShift Pipelines CLI on Windows For Windows, you can download the CLI as a zip archive. Procedure Download the CLI tool . Extract the archive with a ZIP program. Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: C:\> path 1.4. Installing the Red Hat OpenShift Pipelines CLI on macOS For macOS, you can download the CLI as a tar.gz archive. Procedure Download the relevant CLI tool. macOS macOS on ARM Unpack and extract the archive. Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: USD echo USDPATH
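As a concrete sketch of the Linux archive steps above, assuming the archive unpacks the tkn, tkn-pac, and opc binaries into the current directory (the archive file name is a placeholder, as in the procedure):
tar xvzf <file>
sudo install -m 0755 tkn tkn-pac opc /usr/local/bin/
tkn version
Installing into /usr/local/bin is one convenient way to satisfy the PATH requirement; any directory already on your PATH works equally well.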
|
[
"tar xvzf <file>",
"echo USDPATH",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches '*pipelines*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"pipelines-1.18-for-rhel-8-x86_64-rpms\"",
"subscription-manager repos --enable=\"pipelines-1.18-for-rhel-8-s390x-rpms\"",
"subscription-manager repos --enable=\"pipelines-1.18-for-rhel-8-ppc64le-rpms\"",
"subscription-manager repos --enable=\"pipelines-1.18-for-rhel-8-aarch64-rpms\"",
"yum install openshift-pipelines-client",
"tkn version",
"C:\\> path",
"echo USDPATH"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/pipelines_cli_tkn_reference/installing-tkn
|
6.3. Confining Existing Linux Users: semanage login
|
6.3. Confining Existing Linux Users: semanage login If a Linux user is mapped to the SELinux unconfined_u user (the default behavior), and you would like to change which SELinux user they are mapped to, use the semanage login command. The following example creates a new Linux user named newuser , then maps that Linux user to the SELinux user_u user: Procedure 6.2. Mapping Linux Users to the SELinux Users As root, create a new Linux user ( newuser ). Since this user uses the default mapping, it does not appear in the semanage login -l output: To map the Linux newuser user to the SELinux user_u user, enter the following command as root: The -a option adds a new record, and the -s option specifies the SELinux user to map a Linux user to. The last argument, newuser , is the Linux user you want mapped to the specified SELinux user. To view the mapping between the Linux newuser user and user_u , use the semanage utility again: As root, assign a password to the Linux newuser user: Log out of your current session, and log in as the Linux newuser user. Enter the following command to view the newuser 's SELinux context: Log out of the Linux newuser 's session, and log back in with your account. If you do not want the Linux newuser user, enter the following command as root to remove it, along with its home directory: As root, remove the mapping between the Linux newuser user and user_u :
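Beyond mapping individual accounts, the same tool can change the default mapping that newly created Linux users receive. A hedged example, run as root; confirm that the SELinux user and MLS range shown here are appropriate for your policy before applying the change:
~]# semanage login -m -S targeted -s user_u -r s0 __default__
~]# semanage login -l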
|
[
"~]# useradd newuser",
"~]# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * system_u system_u s0-s0:c0.c1023 *",
"~]# semanage login -a -s user_u newuser",
"~]# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * newuser user_u s0 * root unconfined_u s0-s0:c0.c1023 * system_u system_u s0-s0:c0.c1023 *",
"~]# passwd newuser Changing password for user newuser. New password: Enter a password Retype new password: Enter the same password again passwd: all authentication tokens updated successfully.",
"~]USD id -Z user_u:user_r:user_t:s0",
"~]# userdel -r newuser",
"~]# semanage login -d newuser",
"~]# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * system_u system_u s0-s0:c0.c1023 *"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-confining_users-confining_existing_linux_users_semanage_login
|
B.29. hplip
|
B.29. hplip B.29.1. RHSA-2011:0154 - Moderate: hplip security update Updated hplip packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Hewlett-Packard Linux Imaging and Printing (HPLIP) provides drivers for Hewlett-Packard printers and multifunction peripherals, and tools for installing, using, and configuring them. CVE-2010-4267 A flaw was found in the way certain HPLIP tools discovered devices using the SNMP protocol. If a user ran certain HPLIP tools that search for supported devices using SNMP, and a malicious user is able to send specially-crafted SNMP responses, it could cause those HPLIP tools to crash or, possibly, execute arbitrary code with the privileges of the user running them. Red Hat would like to thank Sebastian Krahmer of the SuSE Security Team for reporting this issue. Users of hplip should upgrade to these updated packages, which contain a backported patch to correct this issue.
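To apply the update on an affected system, the standard package update path is sufficient; a brief sketch:
~]# yum update hplip
~]# rpm -q hplip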
| null |
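A minimal update sketch, assuming the standard yum workflow on an affected system (exact package availability depends on the channels attached to the system):
# Update hplip to the fixed packages and confirm the installed version
~]# yum update hplip
~]# rpm -q hplip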
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/hplip
|
Chapter 4. Encrypting and validating OpenStack services
|
Chapter 4. Encrypting and validating OpenStack services You can use barbican to encrypt and validate several Red Hat OpenStack Platform services, such as Block Storage (cinder) encryption keys, Block Storage volume images, Object Storage (swift) objects, and Image Service (glance) images. Important Nova formats encrypted volumes during their first use if they are unencrypted. The resulting block device is then presented to the Compute node. Guidelines for containerized services Do not update any configuration file you might find on the physical node's host operating system, for example, /etc/cinder/cinder.conf . The containerized service does not reference this file. Do not update the configuration file running within the container. Changes are lost once you restart the container. Instead, if you must change containerized services, update the configuration file in /var/lib/config-data/puppet-generated/ , which is used to generate the container. For example: keystone: /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf cinder: /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf nova: /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf Changes are applied after you restart the container. 4.1. Encrypting Object Storage (swift) at-rest objects By default, objects uploaded to Object Storage (swift) are stored unencrypted. Because of this, it is possible to access objects directly from the file system. This can present a security risk if disks are not properly erased before they are discarded. When you have barbican enabled, the Object Storage service (swift) can transparently encrypt and decrypt your stored (at-rest) objects. At-rest encryption is distinct from in-transit encryption in that it refers to the objects being encrypted while being stored on disk. Swift performs these encryption tasks transparently, with the objects being automatically encrypted when uploaded to swift, then automatically decrypted when served to a user. This encryption and decryption is done using the same (symmetric) key, which is stored in barbican. Note You cannot disable encryption after you have enabled encryption and added data to the swift cluster, because the data is now stored in an encrypted state. Consequently, the data will not be readable if encryption is disabled, until you re-enable encryption with the same key. Prerequisites OpenStack Key Manager is installed and enabled Procedure Include the SwiftEncryptionEnabled: True parameter in your environment file, then re-running openstack overcloud deploy using /home/stack/overcloud_deploy.sh . Confirm that swift is configured to use at-rest encryption: The result should include an entry for encryption . 4.2. Encrypting Block Storage (cinder) volumes You can use barbican to manage your Block Storage (cinder) encryption keys. This configuration uses LUKS to encrypt the disks attached to your instances, including boot disks. Key management is transparent to the user; when you create a new volume using luks as the encryption type, cinder generates a symmetric key secret for the volume and stores it in barbican. When booting the instance (or attaching an encrypted volume), nova retrieves the key from barbican and stores the secret locally as a Libvirt secret on the Compute node. Procedure On nodes running the cinder-volume and nova-compute services, confirm that nova and cinder are both configured to use barbican for key management: Create a volume template that uses encryption. 
When you create new volumes they can be modeled off the settings you define here: Create a new volume and specify that it uses the LuksEncryptor-Template-256 settings: The resulting secret is automatically uploaded to the barbican back end. Note Ensure that the user creating the encrypted volume has the creator barbican role on the project. For more information, see the Grant user access to the creator role section. Use barbican to confirm that the disk encryption key is present. In this example, the timestamp matches the LUKS volume creation time: Attach the new volume to an existing instance. For example: The volume is then presented to the guest operating system and can be mounted using the built-in tools. 4.2.1. Migrating Block Storage volumes to OpenStack Key Manager If you previously used ConfKeyManager to manage disk encryption keys, you can migrate the volumes to OpenStack Key Manager by scanning the databases for encryption_key_id entries within scope for migration to barbican. Each entry gets a new barbican key ID and the existing ConfKeyManager secret is retained. Note Previously, you could reassign ownership for volumes encrypted using ConfKeyManager . This is not possible for volumes that have their keys managed by barbican. Activating barbican will not break your existing keymgr volumes. Prerequisites Before you migrate, review the following differences between Barbican-managed encrypted volumes and volumes that use ConfKeyManager : You cannot transfer ownership of encrypted volumes, because it is not currently possible to transfer ownership of the barbican secret. Barbican is more restrictive about who is allowed to read and delete secrets, which can affect some cinder volume operations. For example, a user cannot attach, detach, or delete a different user's volumes. Procedure Deploy the barbican service. Add the creator role to the cinder service. For example: Restart the cinder-volume and cinder-backup services. The cinder-volume and cinder-backup services automatically begin the migration process. You can check the log files to view status information about the migration: cinder-volume - migrates keys stored in cinder's Volumes and Snapshots tables. cinder-backup - migrates keys in the Backups table. Monitor the logs for the message indicating migration has finished and check that no more volumes are using the ConfKeyManager all-zeros encryption key ID. Remove the fixed_key option from cinder.conf and nova.conf . You must determine which nodes have this setting configured. Remove the creator role from the cinder service. Verification After you start the process, one of these entries appears in the log files. This indicates whether the migration started correctly, or it identifies the issue it encountered: Not migrating encryption keys because the ConfKeyManager is still in use. Not migrating encryption keys because the ConfKeyManager's fixed_key is not in use. Not migrating encryption keys because migration to the 'XXX' key_manager backend is not supported. - This message is unlikely to appear; it is a safety check to handle the code ever encountering another Key Manager back end other than barbican. This is because the code only supports one migration scenario: From ConfKeyManager to barbican. Not migrating encryption keys because there are no volumes associated with this host. - This can occur when cinder-volume is running on multiple hosts, and a particular host has no volumes associated with it. 
This arises because every host is responsible for handling its own volumes. Starting migration of ConfKeyManager keys. Migrating volume <UUID> encryption key to Barbican - During migration, all of the host's volumes are examined, and if a volume is still using the ConfKeyManager's key ID (identified by the fact that it's all zeros ( 00000000-0000-0000-0000-000000000000 )), then this message appears. For cinder-backup , this message uses slightly different capitalization: Migrating Volume [...] or Migrating Backup [...] After each host examines all of its volumes, the host displays a summary status message: You may also see the following entries: There are still %d volume(s) using the ConfKeyManager's all-zeros encryption key ID. There are still %d backup(s) using the ConfKeyManager's all-zeros encryption key ID. Both of these messages can appear in the cinder-volume and cinder-backup logs. Although each service only handles the migration of its own entries, each service is aware of the other's status. As a result, cinder-volume knows if cinder-backup still has backups to migrate, and cinder-backup knows if the cinder-volume service has volumes to migrate. Although each host migrates only its own volumes, the summary message is based on a global assessment of whether any volume still requires migration. This allows you to confirm that migration for all volumes is complete. Cleanup After migrating your key IDs into barbican, the fixed key remains in the configuration files. This can present a security concern to some users, because the fixed_key value is not encrypted in the .conf files. To address this, you can manually remove the fixed_key values from your nova and cinder configurations. However, first complete testing and review the output of the log file before you proceed, because disks that are still dependent on this value are not accessible. Important The encryption_key_id was only recently added to the Backup table, as part of the Queens release. As a result, pre-existing backups of encrypted volumes are likely to exist. The all-zeros encryption_key_id is stored on the backup itself, but it does not appear in the Backup database. As such, it is impossible for the migration process to know for certain whether a backup of an encrypted volume exists that still relies on the all-zeros ConfKeyMgr key ID. Review the existing fixed_key values. The values must match for both services. Important Make a backup of the existing fixed_key values. This allows you to restore the value if something goes wrong, or if you need to restore a backup that uses the old encryption key. Delete the fixed_key values: Troubleshooting The barbican secret can only be created when the requestor has the creator role. This means that the cinder service itself requires the creator role, otherwise a log sequence similar to this will occur: Starting migration of ConfKeyManager keys. Migrating volume <UUID> encryption key to Barbican Error migrating encryption key: Forbidden: Secret creation attempt not allowed - please review your user/project privileges There are still %d volume(s) using the ConfKeyManager's all-zeros encryption key ID. The key message is the third one: Secret creation attempt not allowed. To fix the problem, update the cinder account's privileges: Run openstack role add --project service --user cinder creator Restart the cinder-volume and cinder-backup services. As a result, the attempt at migration should succeed. 4.3.

Validating Block Storage (cinder) volume images The Block Storage Service (cinder) automatically validates the signature of any downloaded, signed image during volume from image creation. The signature is validated before the image is written to the volume. To improve performance, you can use the Block Storage Image-Volume cache to store validated images for creating new volumes. Note Cinder image signature validation is not supported with Red Hat Ceph Storage or RBD volumes. Procedure Log in to a Controller node. Choose one of the following options: View cinder's image validation activities in the Volume log, /var/log/containers/cinder/cinder-volume.log . For example, you can expect the following entry when the instance is booted: Use the openstack volume list and cinder volume show commands: Use the openstack volume list command to locate the volume ID. Run the cinder volume show command on a compute node: Locate the volume_image_metadata section with the line signature verified : True . Note Snapshots are saved as Image service (glance) images. If you configure the Compute service (nova) to check for signed images, then you must manually download the image from glance, sign the image, and then re-upload the image. This is true whether the snapshot is from an instance created with signed images, or an instance booted from a volume created from a signed image. Note A volume can be uploaded as an Image service (glance) image. If the original volume was bootable, the image can be used to create a bootable volume in the Block Storage service (cinder). If you have configured the Block Storage service to check for signed images then you must manually download the image from glance, compute the image signature and update all appropriate image signature properties before using the image. For more information, see Section 4.5, "Validating snapshots" . Additional resources Configuring the Block Storage service (cinder) 4.3.1. Automatic deletion of volume image encryption key The Block Storage service (cinder) creates an encryption key in the Key Management service (barbican) when it uploads an encrypted volume to the Image service (glance). This creates a 1:1 relationship between an encryption key and a stored image. Encryption key deletion prevents unlimited resource consumption of the Key Management service. The Block Storage, Key Management, and Image services automatically manage the key for an encrypted volume, including the deletion of the key. The Block Storage service automatically adds two properties to a volume image: cinder_encryption_key_id - The identifier of the encryption key that the Key Management service stores for a specific image. cinder_encryption_key_deletion_policy - The policy that tells the Image service to tell the Key Management service whether to delete the key associated with this image. Important The values of these properties are automatically assigned. To avoid unintentional data loss, do not adjust these values . When you create a volume image, the Block Storage service sets the cinder_encryption_key_deletion_policy property to on_image_deletion . When you delete a volume image, the Image service deletes the corresponding encryption key if the cinder_encryption_key_deletion_policy equals on_image_deletion . Important Red Hat does not recommend manual manipulation of the cinder_encryption_key_id or cinder_encryption_key_deletion_policy properties. 
If you use the encryption key that is identified by the value of cinder_encryption_key_id for any other purpose, you risk data loss. 4.4. Signing Image Service (glance) images When you configure the Image Service (glance) to verify that an uploaded image has not been tampered with, you must sign images before you can start an instance using those images. Use the openssl command to sign an image with a key that is stored in barbican, then upload the image to glance with the accompanying signing information. As a result, the image's signature is verified before each use, with the instance build process failing if the signature does not match. Prerequisites OpenStack Key Manager is installed and enabled Procedure In your environment file, enable image verification with the VerifyGlanceSignatures: True setting. You must re-run the openstack overcloud deploy command for this setting to take effect. To verify that glance image validation is enabled, run the following command on an overcloud Compute node: Note If you use Ceph as the back end for the Image and Compute services, a CoW clone is created. Therefore, image signing verification cannot be performed. Confirm that glance is configured to use barbican: Generate a certificate: Add the certificate to the barbican secret store: Note Record the resulting UUID for use in a later step. In this example, the certificate's UUID is 5df14c2b-f221-4a02-948e-48a61edd3f5b . Use private_key.pem to sign the image and generate the .signature file. For example: Convert the resulting .signature file into base64 format: Load the base64 value into a variable to use it in the subsequent command: Upload the signed image to glance. For img_signature_certificate_uuid , you must specify the UUID of the signing key you previously uploaded to barbican: You can view glance's image validation activities in the Compute log: /var/log/containers/nova/nova-compute.log . For example, you can expect the following entry when the instance is booted: 4.5. Validating snapshots Snapshots are saved as Image service (glance) images. If you configure the Compute service (nova) to check for signed images, then snapshots must be signed, even if they were created from an instance with a signed image. Procedure Download the snapshot from glance. Generate a signature to validate the snapshot. This is the same process you use when you generate a signature to validate any image. For more information, see Validating Image Service (glance) images . Update the image properties: Optional: Remove the downloaded glance image from the filesystem:
|
[
"crudini --get /var/lib/config-data/puppet-generated/swift/etc/swift/proxy-server.conf pipeline-main pipeline pipeline = catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes kms_keymaster encryption proxy-logging proxy-server",
"crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf key_manager backend castellan.key_manager.barbican_key_manager.BarbicanKeyManager crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf key_manager backend castellan.key_manager.barbican_key_manager.BarbicanKeyManager",
"openstack volume type create --encryption-provider nova.volume.encryptors.luks.LuksEncryptor --encryption-cipher aes-xts-plain64 --encryption-key-size 256 --encryption-control-location front-end LuksEncryptor-Template-256 +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | description | None | | encryption | cipher='aes-xts-plain64', control_location='front-end', encryption_id='9df604d0-8584-4ce8-b450-e13e6316c4d3', key_size='256', provider='nova.volume.encryptors.luks.LuksEncryptor' | | id | 78898a82-8f4c-44b2-a460-40a5da9e4d59 | | is_public | True | | name | LuksEncryptor-Template-256 | +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+",
"openstack volume create --size 1 --type LuksEncryptor-Template-256 'Encrypted-Test-Volume' +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2018-01-22T00:19:06.000000 | | description | None | | encrypted | True | | id | a361fd0b-882a-46cc-a669-c633630b5c93 | | migration_status | None | | multiattach | False | | name | Encrypted-Test-Volume | | properties | | | replication_status | None | | size | 1 | | snapshot_id | None | | source_volid | None | | status | creating | | type | LuksEncryptor-Template-256 | | updated_at | None | | user_id | 0e73cb3111614365a144e7f8f1a972af | +---------------------+--------------------------------------+",
"openstack secret list +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | Secret href | Name | Created | Status | Content types | Algorithm | Bit length | Secret type | Mode | Expiration | +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | https://192.168.123.169:9311/v1/secrets/24845e6d-64a5-4071-ba99-0fdd1046172e | None | 2018-01-22T02:23:15+00:00 | ACTIVE | {u'default': u'application/octet-stream'} | aes | 256 | symmetric | None | None | +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+",
"openstack server add volume testInstance Encrypted-Test-Volume",
"#openstack role create creator #openstack role add --user cinder creator --project service",
"`No volumes are using the ConfKeyManager's encryption_key_id.` `No backups are known to be using the ConfKeyManager's encryption_key_id.`",
"crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key",
"crudini --del /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key crudini --del /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key",
"2018-05-24 12:48:35.256 1 INFO cinder.image.image_utils [req-7c271904-4975-4771-9d26-cbea6c0ade31 b464b2fd2a2140e9a88bbdacf67bdd8c a3db2f2beaee454182c95b646fa7331f - default default] Image signature verification succeeded for image d3396fa0-2ea2-4832-8a77-d36fa3f2ab27",
"cinder volume show <VOLUME_ID>",
"cinder show d0db26bb-449d-4111-a59a-6fbb080bb483 +--------------------------------+-------------------------------------------------+ | Property | Value | +--------------------------------+-------------------------------------------------+ | attached_servers | [] | | attachment_ids | [] | | availability_zone | nova | | bootable | true | | consistencygroup_id | None | | created_at | 2018-10-12T19:04:41.000000 | | description | None | | encrypted | True | | id | d0db26bb-449d-4111-a59a-6fbb080bb483 | | metadata | | | migration_status | None | | multiattach | False | | name | None | | os-vol-host-attr:host | centstack.localdomain@nfs#nfs | | os-vol-mig-status-attr:migstat | None | | os-vol-mig-status-attr:name_id | None | | os-vol-tenant-attr:tenant_id | 1a081dd2505547f5a8bb1a230f2295f4 | | replication_status | None | | size | 1 | | snapshot_id | None | | source_volid | None | | status | available | | updated_at | 2018-10-12T19:05:13.000000 | | user_id | ad9fe430b3a6416f908c79e4de3bfa98 | | volume_image_metadata | checksum : f8ab98ff5e73ebab884d80c9dc9c7290 | | | container_format : bare | | | disk_format : qcow2 | | | image_id : 154d4d4b-12bf-41dc-b7c4-35e5a6a3482a | | | image_name : cirros-0.3.5-x86_64-disk | | | min_disk : 0 | | | min_ram : 0 | | | signature_verified : False | | | size : 13267968 | | volume_type | nfs | +--------------------------------+-------------------------------------------------+",
"sudo crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf glance verify_glance_signatures",
"sudo crudini --get /var/lib/config-data/puppet-generated/glance_api/etc/glance/glance-api.conf key_manager backend castellan.key_manager.barbican_key_manager.BarbicanKeyManager",
"openssl genrsa -out private_key.pem 1024 openssl rsa -pubout -in private_key.pem -out public_key.pem openssl req -new -key private_key.pem -out cert_request.csr openssl x509 -req -days 14 -in cert_request.csr -signkey private_key.pem -out x509_signing_cert.crt",
"source ~/overcloudrc openstack secret store --name signing-cert --algorithm RSA --secret-type certificate --payload-content-type \"application/octet-stream\" --payload-content-encoding base64 --payload \"USD(base64 x509_signing_cert.crt)\" -c 'Secret href' -f value https://192.168.123.170:9311/v1/secrets/5df14c2b-f221-4a02-948e-48a61edd3f5b",
"openssl dgst -sha256 -sign private_key.pem -sigopt rsa_padding_mode:pss -out cirros-0.4.0.signature cirros-0.4.0-x86_64-disk.img",
"base64 -w 0 cirros-0.4.0.signature > cirros-0.4.0.signature.b64",
"cirros_signature_b64=USD(cat cirros-0.4.0.signature.b64)",
"openstack image create --container-format bare --disk-format qcow2 --property img_signature=\"USDcirros_signature_b64\" --property img_signature_certificate_uuid=\"5df14c2b-f221-4a02-948e-48a61edd3f5b\" --property img_signature_hash_method=\"SHA-256\" --property img_signature_key_type=\"RSA-PSS\" cirros_0_4_0_signed --file cirros-0.4.0-x86_64-disk.img +--------------------------------+----------------------------------------------------------------------------------+ | Property | Value | +--------------------------------+----------------------------------------------------------------------------------+ | checksum | None | | container_format | bare | | created_at | 2018-01-23T05:37:31Z | | disk_format | qcow2 | | id | d3396fa0-2ea2-4832-8a77-d36fa3f2ab27 | | img_signature | lcI7nGgoKxnCyOcsJ4abbEZEpzXByFPIgiPeiT+Otjz0yvW00KNN3fI0AA6tn9EXrp7fb2xBDE4UaO3v | | | IFquV/s3mU4LcCiGdBAl3pGsMlmZZIQFVNcUPOaayS1kQYKY7kxYmU9iq/AZYyPw37KQI52smC/zoO54 | | | zZ+JpnfwIsM= | | img_signature_certificate_uuid | ba3641c2-6a3d-445a-8543-851a68110eab | | img_signature_hash_method | SHA-256 | | img_signature_key_type | RSA-PSS | | min_disk | 0 | | min_ram | 0 | | name | cirros_0_4_0_signed | | owner | 9f812310df904e6ea01e1bacb84c9f1a | | protected | False | | size | None | | status | queued | | tags | [] | | updated_at | 2018-01-23T05:37:31Z | | virtual_size | None | | visibility | shared | +--------------------------------+----------------------------------------------------------------------------------+",
"2018-05-24 12:48:35.256 1 INFO nova.image.glance [req-7c271904-4975-4771-9d26-cbea6c0ade31 b464b2fd2a2140e9a88bbdacf67bdd8c a3db2f2beaee454182c95b646fa7331f - default default] Image signature verification succeeded for image d3396fa0-2ea2-4832-8a77-d36fa3f2ab27",
"openstack image save --file <local-file-name> <image-name>",
"openstack image set --property img_signature=\"USDcirros_signature_b64\" --property img_signature_certificate_uuid=\"5df14c2b-f221-4a02-948e-48a61edd3f5b\" --property img_signature_hash_method=\"SHA-256\" --property img_signature_key_type=\"RSA-PSS\" <image_id_of_the_snapshot>",
"rm <local-file-name>"
] |
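For reference, the SwiftEncryptionEnabled parameter described in Section 4.1 is normally supplied through a heat environment file that is passed to the overcloud deployment. The following is a minimal sketch only; the file name swift-encryption.yaml and the templates path are assumptions, and the deploy command must match the options already used in your existing /home/stack/overcloud_deploy.sh invocation:
# swift-encryption.yaml: enable transparent at-rest object encryption in swift
parameter_defaults:
  SwiftEncryptionEnabled: true
# Re-run the overcloud deployment with the extra environment file appended
(undercloud) $ openstack overcloud deploy --templates -e /home/stack/templates/swift-encryption.yaml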
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/manage_secrets_with_openstack_key_manager/assembly-encrypting-validating-openstack-services_rhosp
|
Chapter 1. Installation overview
|
Chapter 1. Installation overview Red Hat Enterprise Linux AI is distributed and installable as a bootable image. This bootable image includes a container that holds various software and tools for RHEL AI. Each image is compiled to support specific hardware vendors. Each RHEL AI image includes: Red Hat Enterprise Linux 9.4: A RHEL version 9.4 operating system (OS) for your machine. The InstructLab container: Contains InstructLab and various other tools required for RHEL AI. This includes: Python version 3.11: A Python 3.11 installation used internally by InstructLab. The InstructLab tools: The InstructLab command line interface (CLI). The LAB enhanced method of synthetic data generation (SDG). The LAB enhanced method of single and multi-phase training. InstructLab with vLLM: A high-throughput inference and serving engine for Large Language Models (LLMs). InstructLab with DeepSpeed: Hardware optimization software that speeds up the training process. It provides functionality similar to FSDP. InstructLab with FSDP: A training framework that makes training faster and more efficient. It provides functionality similar to DeepSpeed. Red Hat Enterprise Linux AI version 1.2 also includes a sample taxonomy tree with example skills and knowledge that you can download and use for training a model. Current installation options for Red Hat Enterprise Linux AI Installing on bare metal Installing on AWS Installing on IBM Cloud Installing on GCP (technology preview) Installing on Azure (technology preview) After installation with Red Hat Enterprise Linux AI general availability, you can manually download open source Granite LLMs that you can chat and interact with. For more information about downloading these models, see Downloading additional models .
| null |
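As a sketch of the model download step mentioned above, the InstructLab CLI is used after authenticating to the container registry. The registry path, repository name, and flags below are assumptions based on the general download workflow and may differ for your release:
# Authenticate to the Red Hat container registry (credentials are assumed to exist)
$ podman login registry.redhat.io
# Download a Granite model with the InstructLab CLI (repository and flags are illustrative)
$ ilab model download --repository docker://registry.redhat.io/rhelai1/granite-7b-starter --release latest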
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/installing/installing_overview
|
Chapter 4. Setting up IdM Replicas
|
Chapter 4. Setting up IdM Replicas Replicas are essentially clones of existing Identity Management servers, and they share an identical core configuration. The replica installation process, then, has two major parts: copying the existing, required server configuration and then installing the replica based on that information. 4.1. Planning the Server/Replica Topologies In the IdM domain, there are three types of machines: Servers, which manage all of the services used by domain members Replicas, which are essentially copies of servers (and, once copied, are identical to servers) Clients, which belong to the Kerberos domains, receive certificates and tickets issued by the servers, and use other centralized services for authentication and authorization A replica is a clone of a specific IdM server. The server and replica share the same internal information about users, machines, certificates, and configured policies. These data are copied from the server to the replica in a process called replication . The two Directory Server instances used by an IdM server - the Directory Server instance used by the IdM server as a data store and the Directory Server instance used by the Dogtag Certificate System to store certificate information - are replicated over to corresponding consumer Directory Server instances used by the IdM replica. The different Directory Server instances recognize each other through replication agreements . An initial replication agreement is created between a master server and replica when the replica is created; additional agreements can be added to other servers or replicas using the ipa-replica-manage command. Figure 4.1. Server and Replica Agreements Once they are installed, replicas are functionally identical to servers. There are some guidelines with multi-master replication which place restrictions on the overall server/replica topology. No more than four replication agreements can be configured on a single server/replica. No more than 20 servers and replicas should be involved in a single Identity Management domain. Every server/replica should have a minimum of two replication agreements to ensure that there are no orphan servers or replicas cut out of the IdM domain if another server fails. One of the most resilient topologies is to create a cell configuration for the servers/replicas, where there are a small number of servers in a cell which all have replication agreements with each other (a tight cell), and then each server has one replication agreement with another server outside the cell, loosely coupling that cell to every other cell in the overall domain. Figure 4.2. Example Topology There are some recommendations on how to accomplish this easily: Have at least one IdM server in each main office, data center, or locality. Preferably, have two IdM servers. Do not have more than four servers per data center. Rather than using a server or replica, small offices can use SSSD to cache credentials and use an off-site IdM server as its data backend.
| null |
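The replication agreements described above are created and inspected with the ipa-replica-manage utility; the following is a brief sketch using hypothetical host names:
# List the servers and replicas known to this master
~]# ipa-replica-manage list
# Create an additional replication agreement between two existing servers
~]# ipa-replica-manage connect srv2.example.com srv3.example.com
# Remove an agreement that is no longer needed
~]# ipa-replica-manage disconnect srv2.example.com srv3.example.com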
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/Setting_up_IPA_Replicas
|
8.92. ipa
|
8.92. ipa 8.92.1. RHBA-2014:1383 - ipa bug fix and enhancement update Updated ipa packages that fix multiple bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. Red Hat Identity Management is a centralized authentication, identity management, and authorization solution for both traditional and cloud-based enterprise environments. It integrates components of the Red Hat Directory Server, MIT Kerberos, Red Hat Certificate System, NTP, and DNS. It provides web browser and command-line interfaces. Its administration tools allow an administrator to quickly install, set up, and administer a group of domain controllers to meet the authentication and identity management requirements of large-scale Linux and UNIX deployments. Bug Fixes BZ# 1034478 Previously, the ipa-replica-install script tried to add the "A" and "PTR" records if the master managed Domain Name System (DNS). If the master did not manage the replica's zone, an error message "DNS zone not found" was returned, and the installation of a replica failed. With this update, the ipa-replica-install script has been fixed to properly handle the described situation, and the replica's installation now succeeds. Please note that the "A" and "PTR" records for the replica need to be added manually. BZ# 1083878 Previously, when Identity Management Public Key Infrastructure (PKI) clone in Red Hat Enterprise Linux 7 was being installed, an access to the /ca/ee/ca/profileSubmit URI on the Identity Management server, from which it was replicating, was required. However, Identity Management in Red Hat Enterprise Linux 6 did not export this URI in the httpd proxy configuration. As a consequence, the installation of Identity Management replica with the PKI component in Red Hat Enterprise Linux 7 failed when installed against a Red Hat Enterprise Linux 6 master. With this update, the /ca/ee/ca/profileSubmit URI has been added to Red Hat Enterprise Linux 6 Identity Management proxy configuration and a replica installation now succeeds in this scenario. BZ# 1022199 Prior to this update, disabling a sudo rule did not trigger the removal of its entry from the sudo compat tree in Lightweight Directory Access Protocol (LDAP). Consequently, the disabled sudo rules were still followed on clients using the sudo compat tree. This bug has been fixed, and the described problem no longer occurs. BZ# 1029921 Previously, an Identity Management password policy was not applied to passwords changed using the Directory Manager or PassSync agent. As a consequence, the default expiration time (90 days) was always applied even if the Identity Management administrator had defined a different policy. The Identity Management Password Change Extended Operation Plug-in has been updated, and the password changes made by the Directory Manager or PassSync agent now respect the "max lifetime" field of the user password policy. BZ# 905064 Previously, an intermittent race condition happened when the ipa-server-install utility tried to read the "preop.pin" value from the CS.cfg file, which was still unwritten to the disk by the pkicreate utility. As a consequence, the Identity Management server installation failed. With this update, ipa-server-install has been modified to anticipate such a race condition. Now, when ipa-server-install is unable to read from CS.cfg, it waits until it times out or the file is written to the disk. Additionally, these events are now properly logged in the installation log if they occur. 
BZ# 1040009 Prior to this update, a bug in the Python readline module caused a stray escape sequence to be prepended to the output of the script that the certmonger utility uses to acquire renewed certificates on the Certification Authority (CA) clones. Consequently, certmonger failed to parse the output of the script and the certificate was not renewed. A patch has been provided to address this bug and certmonger is now able to successfully parse the output of the script and complete the certificate renewal. BZ# 1082590 The ipa-client-automount utility uses the Remote Procedure Call (RPC) interface to validate the automount location. Previously, the RPC interface only allowed clients whose API version was earlier than or the same as the server API version to validate the automount location. As a consequence, running ipa-client-automount with a client whose API version was later than the server's failed with an incompatibility error message. With this update, ipa-client-automount has been modified to report a fixed API version in the RPC call and ipa-client-automount now runs successfully when the client API version is later than the server's. BZ# 1016042 Previously, the ipa-replica-manage utility contained a bug in the re-initialize command causing the MemberOf task to fail with an error message under certain circumstances. Consequently, when the ipa-replica-manage re-initialize command was run for a Windows Synchronization (WinSync) replication agreement, it succeeded in the re-initialization part, but failed during execution of the MemberOf task which was run after the re-initialization part. The following error message was returned: However, the error was harmless as running the MemberOf task was not required in this case. A patch has been applied and the error message is no longer returned in the described scenario. BZ# 1088772 Users in Identity Management in Red Hat Enterprise Linux 7 can be added without the password policy explicitly defined in the "krbPwdPolicyReference" attribute in the user object. The User Lockout plug-in locks out users authenticating or binding through the LDAP interface after configured number of failed attempts. In Identity Management in Red Hat Enterprise Linux 7, the plug-in does not require this attribute to be present to correctly apply the lock-out policy. Previously, the Identity Management User Lockout plug-in in Red Hat Enterprise Linux 6 required this attribute to function properly. Consequently, the password lock-out policy was not applied to users created in Identity Management in Red Hat Enterprise Linux 7 that were replicated to Red Hat Enterprise Linux 6. Such users had an unlimited number of authentication attempts in the LDAP interface. The User Lockout plug-in has been updated to respect users without the defined custom policy and to properly fall back to the defined global password policy, and now only a defined number of authentication attempts are allowed to users in the LDAP interface. BZ# 1095250 Previously, the validator in Identity Management did not allow slash characters in the DNS names. As a consequence, it was not possible to add reverse zones in the classless form. With this update, the DNS name validators allow slash characters where necessary, and thus the recommendations of RFC 2317 are now followed. BZ# 1108661 Prior to this update, Identity Management installers could call the ldapmodify utility without explicitly specifying the authentication method. 
Consequently, the installer could fail when the authentication method was set in the ldapmodify user configuration. This bug has been fixed, the installer now always calls ldapmodify with the authentication method explicitly specified, and the described problem no longer occurs. BZ# 1109050 Previously, when a Red Hat Enterprise Linux 6 master was being installed or upgraded, an extra default value was added to the "nsDS5ReplicaId" attribute of the LDAP entry "cn=replication,cn=etc". In Red Hat Enterprise Linux 7, Identity Management uses a stricter validation, which prevents installing a replica on such a system. As a consequence, after a Red Hat Enterprise Linux 6 master was installed or upgraded on a system with more than one master, installing a Red Hat Enterprise Linux 7 replica failed. This bug has been fixed, the extra value is no longer added, and Red Hat Enterprise Linux 7 replicas can be installed successfully in this scenario. BZ# 1015481 Identity Management administration framework API contains two checks on the server side to verify that a request on its API can be passed further: A check to see if the client API version is not higher than the server API version. If it is, the request is rejected. A check to see if the client API request does not use an attribute or a parameter unknown to the server. If it does, the request is rejected. Prior to this update, the Identity Management server performed the checks in an incorrect order. First, the attribute and parameter check was done, then the API version check. As a consequence, when a client (for example, Red Hat Enterprise Linux 6.5) ran the ipa administration utility against a server with an earlier operating system (for instance, Red Hat Enterprise Linux 6.4), the command returned a confusing error message. For example, instead of stating API incompatibility, an error message regarding an unknown option was displayed. This bug has been fixed, the checks on the server are now performed in the correct order and a correct error message is displayed in this scenario. Enhancements BZ# 1111121 Automated configuration of the sudo command has been added to the ipa-client-install utility. By default, ipa-client-install now configures sudo on Identity Management clients by leveraging the newly-added ipa provider in the sssd utility. BZ# 1095333 A set of Apache modules has been added to Red Hat Enterprise Linux 6.6 as a Technology Preview. The Apache modules can be used by external applications to achieve tighter interaction with Identity Management beyond simple authentication. Users of ipa are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
|
[
"Update succeeded Can't contact LDAP server"
] |
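For BZ#1034478, the replica's "A" and "PTR" records must be added manually in whichever DNS server is authoritative for the replica's zone. When that zone happens to be hosted in IdM-managed DNS, the commands take roughly the following form; the zone names and address are hypothetical:
# Add the forward (A) record for the replica
~]# ipa dnsrecord-add example.com replica1 --a-rec=192.0.2.10
# Add the reverse (PTR) record in the corresponding reverse zone
~]# ipa dnsrecord-add 2.0.192.in-addr.arpa 10 --ptr-rec=replica1.example.com.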
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ipa
|
Chapter 2. How Red Hat JBoss Enterprise Application Platform Handles Security out of the Box
|
Chapter 2. How Red Hat JBoss Enterprise Application Platform Handles Security out of the Box There are three components that ship with JBoss EAP that relate to security: The Elytron Subsystem , introduced in JBoss EAP 7.1 Core Management Authentication The Security Subsystem These components are based on the general security concepts discussed in the Overview of General Security Concepts , but they also incorporate some JBoss EAP-specific concepts in their implementation. 2.1. Core Services, Subsystems, and Profiles JBoss EAP is built on the concept of modular class loading. Each API or service provided by JBoss EAP is implemented as a module, which is loaded and unloaded on demand. The core services are services that are always loaded on server startup and are required to be running prior to starting an additional subsystem. A subsystem is a set of capabilities added to the core server by an extension. For example, different subsystems handle servlet processing, manage the Jakarta Enterprise Beans container, and provide Jakarta Transactions support. A profile is a named list of subsystems, along with the details of each subsystem's configuration. A profile with a large number of subsystems results in a server with a large set of capabilities. A profile with a small, focused set of subsystems will have fewer capabilities but a smaller footprint. By default, JBoss EAP comes with several predefined profiles, for example default , full , ha , full-ha . In these profiles, the management interfaces and the associated security realms are loaded as core services. 2.2. Management Interfaces JBoss EAP offers two main management interfaces for interacting with and editing its configuration: the management console and the management CLI. Both interfaces expose the functionality of the core management of JBoss EAP. These interfaces offer two ways to access the same core management system. The management console is a web-based administration tool for JBoss EAP. It may be used to start and stop servers, deploy and undeploy applications, tune system settings, and make persistent modifications to the server configuration. The management console also has the ability to perform administrative tasks, with live notifications when any changes require the server instance to be restarted or reloaded. In a managed domain, server instances and server groups in the same domain can be centrally managed from the management console of the domain controller. The management CLI is a command-line administration tool for JBoss EAP. The management CLI may be used to start and stop servers, deploy and undeploy applications, configure system settings, and perform other administrative tasks. Operations can be performed in batch mode, allowing multiple tasks to be run as a group. The management CLI may also connect to the domain controller in a managed domain to execute management operations on the domain. The management CLI can perform all tasks that the web-based administration tool can perform as well as many other lower-level operations that are unavailable to the web-based administration tool. Note In addition to the clients that ship with JBoss EAP, other clients can be written to invoke the management interfaces over either the HTTP or native interfaces using the APIs included with JBoss EAP. 2.3. Jakarta Management Jakarta Management provides a way to remotely trigger JDK and application management operations. The management API of JBoss EAP is exposed as Jakarta Management managed beans. 
These managed beans are referred to as core MBeans, and access to them is controlled and filtered exactly the same as the underlying management API itself. In addition to the management CLI and management console, Jakarta Management-exposed beans are an alternative mechanism to access and perform management operations. 2.4. Role-Based Access Control Role-Based Access Control (RBAC) is a mechanism for specifying a set of permissions for management users. It allows multiple users to share responsibility for managing JBoss EAP servers without each of them requiring unrestricted access. By providing a separation of duties for management users, JBoss EAP makes it easy for an organization to spread responsibility between individuals or groups without granting unnecessary privileges. This ensures the maximum possible security of your servers and data while still providing flexibility for configuration, deployment, and management. RBAC in JBoss EAP works through a combination of role permissions and constraints. Seven predefined roles are provided, with every role having different fixed permissions. Each management user is assigned one or more roles that specify what the user is permitted to do when managing the server. RBAC is disabled by default for JBoss EAP. Standard Roles JBoss EAP provides seven predefined user roles: Monitor , Operator , Maintainer , Deployer , Auditor , Administrator , and SuperUser . Each role has a different set of permissions and is designed for specific use cases. The Monitor , Operator , Maintainer , Administrator , and SuperUser roles build successively upon each other, with each having more permissions than the one before it. The Auditor and Deployer roles are similar to the Monitor and Maintainer roles, respectively, but they have some special permissions and restrictions. Monitor Users of the Monitor role have the fewest permissions and can only read the current configuration and state of the server. This role is intended for users who need to track and report on the performance of the server. Monitor cannot modify server configuration, nor can they access sensitive data or operations. Operator The Operator role extends the Monitor role by adding the ability to modify the runtime state of the server. This means that Operator can reload and shut down the server as well as pause and resume Jakarta Messaging destinations. The Operator role is ideal for users who are responsible for the physical or virtual hosts of the application server so they can ensure that servers can be shut down and restarted correctly when necessary. Operator cannot modify server configuration or access sensitive data or operations. Maintainer The Maintainer role has access to view and modify the runtime state and all configurations except sensitive data and operations. The Maintainer role is the general purpose role that does not have access to sensitive data and operations. The Maintainer role allows users to be granted almost complete access to administer the server without giving those users access to passwords and other sensitive information. Maintainer cannot access sensitive data or operations. Administrator The Administrator role has unrestricted access to all resources and operations on the server except the audit logging system. The Administrator role has access to sensitive data and operations. This role can also configure the access control system. The Administrator role is only required when handling sensitive data or configuring users and roles.
Administrator cannot access the audit logging system and cannot change themselves to the Auditor or SuperUser role. SuperUser The SuperUser role does not have any restrictions, and it has complete access to all resources and operations of the server, including the audit logging system. If RBAC is disabled, all management users have permissions equivalent to the SuperUser role. Deployer The Deployer role has the same permissions as the Monitor , but it can modify the configuration and state for deployments and any other resource type enabled as an application resource. Auditor The Auditor role has all the permissions of the Monitor role and can also view, but not modify, sensitive data. It has full access to the audit logging system. The Auditor role is the only role besides SuperUser that can access the audit logging system. Auditor cannot modify sensitive data or resources. Only read access is permitted. Permissions Permissions determine what each role can do; not every role has every permission. Notably, SuperUser has every permission and Monitor has the least. Each permission can grant read and/or write access for a single category of resources. The categories are runtime state, server configuration, sensitive data, the audit log, and the access control system. Table 2.1. Permissions of Each Role for Monitor, Operator, Maintainer and Deployer: Monitor - Read Config and State only. Operator - Read Config and State; Modify Runtime State. Maintainer - Read Config and State; Modify Runtime State; Modify Persistent Config. Deployer - Read Config and State; Modify Runtime State [1]; Modify Persistent Config [1]. None of these four roles can read or modify sensitive data, the audit log, or the access control configuration. [1] Permissions are restricted to application resources. Table 2.2. Permissions of Each Role for Auditor, Administrator and SuperUser: Auditor - Read Config and State; Read Sensitive Data [2]; Read/Modify Audit Log. Administrator - Read Config and State; Read Sensitive Data [2]; Modify Sensitive Data [2]; Modify Runtime State; Modify Persistent Config; Read/Modify Access Control. SuperUser - all permissions, including Read/Modify Audit Log. [2] Which resources are considered to be sensitive data is configured using sensitivity constraints. Constraints Constraints are named sets of access-control configuration for a specified list of resources. The RBAC system uses the combination of constraints and role permissions to determine if any specific user can perform a management action. Constraints are divided into three classifications. Application Constraints Application constraints define sets of resources and attributes that can be accessed by Deployer users. By default, the only enabled application constraint is core, which includes deployments and deployment overlays. Application constraints are also included, but not enabled by default, for data sources, logging, mail, messaging, naming, resource adapters, and security. These constraints allow Deployer users to not only deploy applications, but also configure and maintain the resources that are required by those applications. Sensitivity Constraints Sensitivity constraints define sets of resources that are considered sensitive. A sensitive resource is generally one that is either secret, like a password, or one that will have serious impact on the operation of the server, like networking, JVM configuration, or system properties. The access control system itself is also considered sensitive. The only roles permitted to write to sensitive resources are Administrator and SuperUser. The Auditor role is only able to read sensitive resources. No other roles have access.
Vault Expression Constraint The vault expression constraint defines if reading and writing vault expressions are considered sensitive operations. By default, reading and writing vault expressions are sensitive operations. 2.4.1. Configuring RBAC If RBAC is already enabled, you must have the SuperUser or Administrator role assigned to make configuration changes at the user or group level. Procedure Enable RBAC using the following command: As a SuperUser or an Administrator of JBoss EAP, configure RBAC: To add one of the supported roles, such as the Monitor role that has read-only access, use the following command: Note For more information about the Monitor role and other supported roles that you can add, see Role-Based Access Control . To add a user to a specific role, such as the Monitor role, use the following command: To add a group to a specific role, such as the Monitor role, use the following command: To exclude users or groups from a specific role, use the following command: Restart the server or the host to enable it to operate with RBAC configuration: To restart the host machine, use the following command: To restart the server in the standalone mode, use the following command: 2.5. Declarative Security and Jakarta Authentication Declarative security is a method to separate security concerns from application code by using the container to manage security. The container provides an authorization system based on either file permissions or users, groups, and roles. This approach is usually superior to programmatic security, which gives the application itself all of the responsibility for security. JBoss EAP provides declarative security by using security domains in the security subsystem. Jakarta Authentication is a declarative security API comprising a set of Java packages designed for user authentication and authorization. The API is a Java implementation of the standard Pluggable Authentication Modules (PAM) framework. It extends the Jakarta EE access control architecture to support user-based authorization. The JBoss EAP security subsystem is actually based on the Jakarta Authentication API. Because Jakarta Authentication is the foundation for the security subsystem, authentication is performed in a pluggable fashion. This permits Java applications to remain independent from underlying authentication technologies, such as Kerberos or LDAP, and allows the security manager to work in different security infrastructures. Integration with a security infrastructure is achievable without changing the security manager implementation. Only the configuration of the authentication stack that Jakarta Authentication uses needs to be changed. 2.6. Elytron Subsystem The elytron subsystem was introduced in JBoss EAP 7.1. It is based on the WildFly Elytron project, which is a security framework used to unify security across the entire application server. The elytron subsystem enables a single point of configuration for securing both applications and the management interfaces. WildFly Elytron also provides a set of APIs and SPIs for providing custom implementations of functionality and integrating with the elytron subsystem. In addition, there are several other important features of WildFly Elytron: Stronger authentication mechanisms for HTTP and SASL authentication. Improved architecture that allows for SecurityIdentities to be propagated across security domains. This ensures transparent transformation that is ready to be used for authorization. 
This transformation takes place using configurable role decoders, role mappers, and permission mappers. Centralized point for SSL/TLS configuration including cipher suites and protocols. SSL/TLS optimizations such as eager SecureIdentity construction and closely tying authorization to establishing an SSL/TLS connection. Eager SecureIdentity construction eliminates the need for a SecureIdentity to be constructed on a per-request basis. Closely tying authentication to establishing an SSL/TLS connection enables permission checks to happen BEFORE the first request is received. A secure credential store that replaces the vault implementation to store plain text strings. The new elytron subsystem exists in parallel to the legacy security subsystem and legacy core management authentication. Both the legacy and Elytron methods can be used for securing the management interfaces as well as providing security for applications. Important The architectures of Elytron and the legacy security subsystem that is based on PicketBox are very different. With Elytron, an attempt was made to create a solution that allows you to operate in the same security environments in which you currently operate; however, this does not mean that every PicketBox configuration option has an equivalent configuration option in Elytron. If you are not able to find information in the documentation to help you achieve similar functionality using Elytron that you had when using the legacy security implementation, you can find help in one of the following ways. If you have a Red Hat Development subscription , you have access to Support Cases , Solutions , and Knowledge Articles on the Red Hat Customer Portal. You can also open a case with Technical Support and get help from the WildFly community as described below. If you do not have a Red Hat Development subscription, you can still access Knowledge Articles on the Red Hat Customer Portal. You can also join the user forums and live chat to ask questions of the WildFly community. The WildFly community offerings are actively monitored by the Elytron engineering team. 2.6.1. Core Concepts and Components The concept behind the architecture and design of the elytron subsystem is using smaller components to assemble a full security policy. By default, JBoss EAP provides implementations for many components, but the elytron subsystem also allows you to provide specialized, custom implementations. Each implementation of a component in the 'elytron' subsystem is handled as an individual capability. This means that different implementations can be mixed, matched and modeled using distinct resources. 2.6.1.1. Capabilities and Requirements A capability is a piece of functionality used in JBoss EAP and is exposed using the management layer. One capability can depend on other capabilities and this dependency is mediated by the management layer. Some capabilities are provided automatically by JBoss EAP, but the full set of available capabilities available at runtime are determined using the JBoss EAP configuration. The management layer validates that all capabilities required by other capabilities are present during server startup as well as when any configuration changes are made. Capabilities integrate with JBoss Modules and extensions, but they are all distinct concepts. In addition to registering other capabilities it depends on, a capability must also register a set of requirements related to those capabilities. 
A capability can specify the following types of requirements: Hard requirements A capability depends on another capability in order to function, so the required capability must always be present. Optional requirements An optional aspect of a capability depends on another capability, which may or may not be enabled. Therefore, the requirement cannot be determined until the configuration is analyzed. Runtime-only requirements A capability will check whether the required capability exists at runtime. If the required capability is present, it will be used. If the required capability is not present, it will not be used. You can find more information on capabilities and requirements in the WildFly documentation. 2.6.1.2. APIs, SPIs and Custom Implementations Elytron provides a set of security APIs and SPIs so that other subsystems and consumers can use them directly, which reduces integration overhead. While the majority of users will use the provided functionality of JBoss EAP, the Elytron APIs and SPIs can also be used by custom implementations to replace or extend Elytron functionality. 2.6.1.3. Security Domains A security domain is the representation of a security policy which is backed by one or more security realms and a set of resources that perform transformations. A security domain produces a SecurityIdentity. The SecurityIdentity is used by other resources that perform authorizations, such as an application. A SecurityIdentity is the representation of the current user, which is based on the raw AuthorizationIdentity and its associated roles and permissions. You can also configure a security domain to allow inflow of a SecurityIdentity from another security domain. When an identity is inflowed, it retains its original raw AuthorizationIdentity, and a new set of roles and permissions is assigned to it, creating a new SecurityIdentity. Important A deployment is limited to using one Elytron security domain. Scenarios that may have required multiple legacy security domains can now be accomplished using a single Elytron security domain. 2.6.1.4. Security Realms Security realms provide access to an identity store and are used to obtain credentials. These credentials allow authentication mechanisms to obtain the raw AuthorizationIdentity for performing authentication. They also allow authentication mechanisms to perform verification when validating evidence. You can associate one or more security realms with a security domain. Some security realm implementations also expose an API for modifications, meaning the security realm can make updates to the underlying identity store. 2.6.1.5. Role Decoders A role decoder is associated with a security domain and is used to decode the current user's roles. The role decoder takes the raw AuthorizationIdentity returned from the security realm and converts its attributes into roles. 2.6.1.6. Role Mappers A role mapper applies a role modification to an identity. This can range from normalizing the format of the roles to adding or removing specific roles. A role mapper can be associated with both security realms and security domains. In cases where a role mapper is associated with a security realm, the role mapping will be applied at the security realm level before any transformations, such as role decoding or additional role mapping, occur at the security domain level.
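Taken together, the security realm, role decoder, and security domain components described above are assembled from the management CLI. The following commands are a minimal sketch only, based on the filesystem-realm pattern referenced later in this chapter; exampleRealm, from-roles-attribute, and exampleSD are placeholder names, the Roles attribute name is an assumption, and the built-in default-permission-mapper is assumed to be present:

/subsystem=elytron/filesystem-realm=exampleRealm:add(path=example-users,relative-to=jboss.server.config.dir)
/subsystem=elytron/simple-role-decoder=from-roles-attribute:add(attribute=Roles)
/subsystem=elytron/security-domain=exampleSD:add(default-realm=exampleRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleRealm,role-decoder=from-roles-attribute}])

A role mapper, for example a constant-role-mapper, can then be referenced either from the realms entry or from the security domain itself, which is where the ordering rules described here and in the next paragraph apply.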
If a role mapper and another transformation, such as a role decoder, are both configured in a security domain, all other transformations are performed before the role mapper is applied. 2.6.1.7. Permission Mappers A permission mapper is associated with a security domain and assigns a set of permissions to a SecurityIdentity. 2.6.1.8. Principal Transformers A principal transformer can be used in multiple locations within the elytron subsystem. A principal transformer can transform or map a name to another name. 2.6.1.9. Principal Decoders A principal decoder can be used in multiple locations within the elytron subsystem. A principal decoder converts an identity from a Principal to a string representation of the name. For example, the X500PrincipalDecoder allows you to convert an X500Principal from a certificate's distinguished name to a string representation. 2.6.1.10. Realm Mappers A realm mapper is associated with a security domain and is used in cases where a security domain has multiple security realms configured. A realm mapper can also be associated with a mechanism or mechanism-realm of an http-authentication-factory or sasl-authentication-factory. The realm mapper uses the name provided during authentication to choose a security realm for authentication and obtaining the raw AuthorizationIdentity. 2.6.1.11. Authentication Factories An authentication factory is a representation of an authentication policy. An authentication factory is associated with a security domain, a mechanism factory, and a mechanism selector. The security domain provides the SecurityIdentity to be authenticated, the mechanism factory provides the server-side authentication mechanisms, and the mechanism selector is used to obtain configuration specific to the mechanism selected. The mechanism selector can include information about realm names a mechanism should present to a remote client, as well as additional principal transformers and realm mappers to use during the authentication process. 2.6.1.12. KeyStores A key-store is the definition of a keystore or truststore, including the type of keystore, its location, and the credential for accessing it. 2.6.1.13. Key Managers A key-manager references a key-store and is used in conjunction with an SSL context. 2.6.1.14. Trust Managers A trust-manager references a truststore, which is defined in a key-store, and is used in conjunction with an SSL context, usually for two-way SSL/TLS. 2.6.1.15. SSL Context The SSL context defined within the elytron subsystem is a javax.net.ssl.SSLContext, meaning it can be used by anything that uses an SSL context directly. In addition to the usual configuration for an SSL context, it is possible to configure additional items such as cipher suites and protocols. The SSL context will wrap any additional items that are configured. 2.6.1.16. Secure Credential Store The vault implementation used for plain text string encryption has been replaced with a newly designed credential store. In addition to the protection for the credentials it stores, the credential store is used to store plain text strings. 2.6.2. Elytron Authentication Process Multiple principal transformers, realm mappers, and a principal decoder can be defined within the elytron subsystem. The following sections discuss how these components function during the authentication process, and how principals are mapped to the appropriate security realm. When a principal is authenticated, the following steps are performed, in order: The appropriate mechanism configuration is determined and configured.
The incoming principal is mapped into a SecurityIdentity . This SecurityIdentity is used to determine the appropriate security realm. After the security realm has been identified the principal is transformed again. One final transformation occurs to allow for mechanism-specific transformations. The following image demonstrates these steps, highlighted in the left column, along with showing the components used in each phase. Figure 2.1. Elytron Authentication Process Pre-realm Mapping During pre-realm mapping the authenticated principal is mapped to a SecurityIdentity , a form that can identify which security realm should be used, and will contain a single Principal that represents the authenticated information. Principal transformers and principal decoders are called in the following order: Mechanism Realm - pre-realm-principal-transformer Mechanism Configuration - pre-realm-principal-transformer Security Domain - principal-decoder and pre-realm-principal-transformer If this procedure results in a null principal, then an error will be thrown and authentication will terminate. Figure 2.2. Pre-realm Mapping Realm Name Mapping Once a mapped principal has been obtained, a security realm is identified which will be used to load the identity. At this point the realm name is the name defined by the security realm as referenced by the security domain, and is not yet the mechanism realm name. The configuration will look for a security realm name in the following order: Mechanism Realm - realm-mapper Mechanism Configuration - realm-mapper Security Domain - realm-mapper If the RealmMapper returns null, or if no mapper is available, then the default-realm on the security domain will be used. Figure 2.3. Realm Name Mapping Post-realm Mapping After a realm has been identified, the principal goes through another round of transformation. Transformers are called in the following order: Mechanism Realm - post-realm-principal-transformer Mechanism Configuration - post-realm-principal-transformer Security Domain - post-realm-principal-transformer If this procedure results in a null principal, then an error will be thrown and authentication will terminate. Figure 2.4. Post-realm Mapping Final Principal Transformation Finally, one last round of principal transformation occurs to allow for mechanism-specific transformations to be applied both before and after domain-specific transformations. If this stage is not required, then the same results can be obtained during the post-realm mapping stage. Transformers are called in the following order: Mechanism Realm - final-principal-transformer Mechanism Configuration - final-principal-transformer Realm Mapping - principal-transformer If this procedure results in a null principal, then an error will be thrown and authentication will terminate. Figure 2.5. Final Principal Transformation Obtain the Realm Identity After the final round of principal transformation, the security realm identified in realm name mapping is called to obtain a realm identity used to continue authentication. 2.6.3. HTTP Authentication Elytron provides a complete set of HTTP authentication mechanisms including BASIC , FORM , DIGEST , SPNEGO , and CLIENT_CERT . HTTP authentication is handled using the HttpAuthenticationFactory , which is both an authentication policy for using HTTP authentication mechanisms as well as factory for configured authentication mechanisms. 
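As a rough illustration, an http-authentication-factory offering BASIC authentication against a security domain can be defined from the management CLI. This is a sketch only; example-http-auth, exampleSD, and exampleApplicationRealm are placeholder names, and the built-in global provider-http-server-mechanism-factory is assumed:

/subsystem=elytron/http-authentication-factory=example-http-auth:add(http-server-mechanism-factory=global,security-domain=exampleSD,mechanism-configurations=[{mechanism-name=BASIC,mechanism-realm-configurations=[{realm-name=exampleApplicationRealm}]}])

The realm-name given here is the name the mechanism presents to the remote client; it does not have to match the name of the underlying security realm.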
The HttpAuthenticationFactory references the following: SecurityDomain The security domain that any mechanism authentication will be performed against. HttpServerAuthenticationMechanismFactory The general factory for server-side HTTP authentication mechanisms. MechanismConfigurationSelector You can use this to supply additional configuration for the authentication mechanisms. The purpose of the MechanismConfigurationSelector is to obtain configuration specific to the mechanism selected. This can include information about realm names a mechanism should present to a remote client, additional principal transformers, and realm mappers to use during the authentication process. 2.6.4. SASL Authentication SASL is a framework for authentication that separates the authentication mechanism itself from the protocol it uses. It also allows for additional authentication mechanisms such as DIGEST-MD5, GSSAPI, OTP, and SCRAM. SASL authentication is not part of the Jakarta EE specification. SASL authentication is handled using the SaslAuthenticationFactory, which is both an authentication policy for using SASL authentication mechanisms as well as a factory for configured authentication mechanisms. The SaslAuthenticationFactory references the following: SecurityDomain The security domain that any mechanism authentication will be performed against. SaslServerFactory The general factory for server-side SASL authentication mechanisms. MechanismConfigurationSelector You can use this to supply additional configuration for the authentication mechanisms. The purpose of the MechanismConfigurationSelector is to obtain configuration specific to the mechanism selected. This can include information about realm names a mechanism should present to a remote client, additional principal transformers, and realm mappers to use during the authentication process. 2.6.5. Interaction between the Elytron Subsystem and Legacy Systems You can map some of the major components of both the legacy security subsystem and the legacy core management authentication to Elytron capabilities. This allows those legacy components to be used in an Elytron-based configuration and allows for an incremental migration from legacy components. 2.6.6. Resources in the Elytron Subsystem JBoss EAP provides a set of resources in the elytron subsystem: Factories Principal Transformers Principal Decoders Realm Mappers Realms Permission Mappers Role Decoders Role Mappers SSL Components Other Factories aggregate-http-server-mechanism-factory An HTTP server factory definition where the HTTP server factory is an aggregation of other HTTP server factories. aggregate-sasl-server-factory A SASL server factory definition where the SASL server factory is an aggregation of other SASL server factories. configurable-http-server-mechanism-factory An HTTP server factory definition that wraps another HTTP server factory and applies the specified configuration and filtering. configurable-sasl-server-factory A SASL server factory definition that wraps another SASL server factory and applies the specified configuration and filtering. custom-credential-security-factory A custom credential SecurityFactory definition. http-authentication-factory Resource containing the association of a security domain with an HttpServerAuthenticationMechanismFactory. For more information, see Configure Authentication with Certificates in How to Configure Identity Management for JBoss EAP.
kerberos-security-factory A security factory for obtaining a GSSCredential for use during authentication. For more information, see Configure the Elytron Subsystem in How to Set Up SSO with Kerberos for JBoss EAP. mechanism-provider-filtering-sasl-server-factory A SASL server factory definition that enables filtering by provider where the factory was loaded using a provider. provider-http-server-mechanism-factory An HTTP server factory definition where the HTTP server factory is an aggregation of factories from the provider list. provider-sasl-server-factory A SASL server factory definition where the SASL server factory is an aggregation of factories from the provider list. sasl-authentication-factory Resource containing the association of a security domain with a SASL server factory. For more information, see Secure the Management Interfaces with a New Identity Store in How to Configure Server Security for JBoss EAP. service-loader-http-server-mechanism-factory An HTTP server factory definition where the HTTP server factory is an aggregation of factories identified using a ServiceLoader . service-loader-sasl-server-factory A SASL server factory definition where the SASL server factory is an aggregation of factories identified using a ServiceLoader . Principal Transformers aggregate-principal-transformer Individual transformers attempt to transform the original principal until one returns a non-null principal. chained-principal-transformer A principal transformer definition where the principal transformer is a chaining of other principal transformers. constant-principal-transformer A principal transformer definition where the principal transformer always returns the same constant. custom-principal-transformer A custom principal transformer definition. regex-principal-transformer A regular expression based principal transformer. regex-validating-principal-transformer A regular expression based principal transformer which uses the regular expression to validate the name. Principal Decoders aggregate-principal-decoder A principal decoder definition where the principal decoder is an aggregation of other principal decoders. concatenating-principal-decoder A principal decoder definition where the principal decoder is a concatenation of other principal decoders. constant-principal-decoder Definition of a principal decoder that always returns the same constant. custom-principal-decoder Definition of a custom principal decoder. x500-attribute-principal-decoder Definition of an X500 attribute based principal decoder. For more information, see Configure Authentication with Certificates in How to Configure Identity Management for JBoss EAP. x509-subject-alternative-name-evidence-decoder Evidence decoder to use a subject alternative name extension in an X.509 certificate as the principal. For more information, see Configuring Evidence Decoder for X.509 Certificate with Subject Alternative Name Extension in How to Configure Server Security for JBoss EAP. Realm Mappers constant-realm-mapper Definition of a constant realm mapper that always returns the same value. custom-realm-mapper Definition of a custom realm mapper. mapped-regex-realm-mapper Definition of a realm mapper implementation that first uses a regular expression to extract the realm name, this is then converted using the configured mapping of realm names. 
simple-regex-realm-mapper Definition of a simple realm mapper that attempts to extract the realm name using the capture group from the regular expression; if that does not provide a match, the delegate realm mapper is used instead. Realms aggregate-realm A realm definition that is an aggregation of two realms, one for the authentication steps and one for loading the identity for the authorization steps. Note The exported legacy security domain cannot be used as an Elytron security realm for the authorization step of the aggregate-realm. caching-realm A realm definition that adds caching in front of another security realm. The caching strategy is Least Recently Used, where the least accessed entries are discarded when the maximum number of entries is reached. For more information, see Set Up Caching for Security Realms in How to Configure Identity Management for JBoss EAP. custom-modifiable-realm A custom realm that is configured as modifiable is expected to implement the ModifiableSecurityRealm interface. By configuring a realm as modifiable, management operations are made available to manipulate the realm. custom-realm A custom realm definition can implement either the SecurityRealm interface or the ModifiableSecurityRealm interface. Regardless of which interface is implemented, management operations will not be exposed to manage the realm. However, other services that depend on the realm will still be able to perform a type check and cast to gain access to the modification API. filesystem-realm A simple security realm definition backed by the file system. For more information, see Configure Authentication with a Filesystem-Based Identity Store in How to Configure Identity Management for JBoss EAP. identity-realm A security realm definition where identities are represented in the management model. jdbc-realm A security realm definition backed by a database using JDBC. For more information, see Configure Authentication with a Database-Based Identity Store in How to Configure Identity Management for JBoss EAP. key-store-realm A security realm definition backed by a keystore. For more information, see Configure Authentication with Certificates in How to Configure Identity Management for JBoss EAP. ldap-realm A security realm definition backed by LDAP. For more information, see Configure Authentication with a LDAP-Based Identity Store in How to Configure Identity Management for JBoss EAP. properties-realm A security realm definition backed by properties files. For more information, see Configure Authentication with a Properties File-Based Identity Store in How to Configure Identity Management for JBoss EAP. token-realm A security realm definition capable of validating and extracting identities from security tokens. Permission Mappers custom-permission-mapper Definition of a custom permission mapper. logical-permission-mapper Definition of a logical permission mapper. simple-permission-mapper Definition of a simple configured permission mapper. constant-permission-mapper Definition of a permission mapper that always returns the same constant. Role Decoders custom-role-decoder Definition of a custom RoleDecoder. simple-role-decoder Definition of a simple RoleDecoder that takes a single attribute and maps it directly to roles. source-address-role-decoder Definition of a source-address-role-decoder that assigns roles to an identity based on the IP address of the client. aggregate-role-decoder Definition of an aggregate-role-decoder that aggregates the roles returned by two or more role decoders.
For more information, see Configure Authentication with a Filesystem-Based Identity Store in How to Configure Identity Management for JBoss EAP. Role Mappers add-prefix-role-mapper A role mapper definition for a role mapper that adds a prefix to each role provided. add-suffix-role-mapper A role mapper definition for a role mapper that adds a suffix to each role provided. aggregate-role-mapper A role mapper definition where the role mapper is an aggregation of other role mappers. constant-role-mapper A role mapper definition where a constant set of roles is always returned. For more information, see Configure Authentication with Certificates in How to Configure Identity Management for JBoss EAP. custom-role-mapper Definition of a custom role mapper. logical-role-mapper A role mapper definition for a role mapper that performs a logical operation using two referenced role mappers. mapped-role-mapper A role mapper definition for a role mapper that uses a preconfigured mapping of role names. regex-role-mapper A role mapper definition for a role mapper that uses a regular expression to translate roles. For example, you can map "app-admin" and "app-operator" to "admin" and "operator", respectively. For more information, see regex-role-mapper Attributes. SSL Components client-ssl-context An SSLContext for use on the client side of a connection. For more information, see Using a client-ssl-context in How to Configure Server Security for JBoss EAP. filtering-key-store A filtering keystore definition, which provides a keystore by filtering a key-store. For more information, see Using a filtering-key-store in How to Configure Server Security for JBoss EAP. key-manager A key manager definition for creating the key manager list as used to create an SSL context. For more information, see Enable One-way SSL/TLS for the Management Interfaces Using the Elytron Subsystem in How to Configure Server Security for JBoss EAP. key-store A keystore definition. For more information, see Enable One-way SSL/TLS for the Management Interfaces Using the Elytron Subsystem in How to Configure Server Security for JBoss EAP. ldap-key-store An LDAP keystore definition, which loads a keystore from an LDAP server. For more information, see Using an ldap-key-store in How to Configure Server Security for JBoss EAP. server-ssl-context An SSL context for use on the server side of a connection. For more information, see Enable One-way SSL/TLS for the Management Interfaces Using the Elytron Subsystem in How to Configure Server Security for JBoss EAP. trust-manager A trust manager definition for creating the TrustManager list as used to create an SSL context. For more information, see Enable Two-way SSL/TLS for the Management Interfaces using the Elytron Subsystem in How to Configure Server Security for JBoss EAP. Other aggregate-providers An aggregation of two or more provider-loader resources. authentication-configuration An individual authentication configuration definition, which is used by clients deployed to JBoss EAP and other resources for authenticating when making a remote connection. authentication-context An individual authentication context definition, which is used to supply an ssl-context and authentication-configuration when clients deployed to JBoss EAP and other resources make a remoting connection. credential-store A credential store that keeps aliases for sensitive information, such as passwords for external services. For more information, see Create a Credential Store in How to Configure Server Security for JBoss EAP.
dir-context The configuration to connect to a directory (LDAP) server. For more information, see Using an ldap-key-store in How to Configure Server Security for JBoss EAP. provider-loader A definition for a provider loader. security-domain A security domain definition. For more information, see Configure Authentication with Certificates in How to Configure Identity Management for JBoss EAP. security-property A definition of a security property to be set. 2.7. Core Management Authentication Core management authentication is responsible for securing the management interfaces, HTTP and native, for the core management functionality using the ManagementRealm . It is built into the core management and is enabled and configured as a core service by default. It is only responsible for securing the management interfaces. 2.7.1. Security Realms A security realm is an identity store of usernames, passwords, and group membership information that can be used when authenticating users in Jakarta Enterprise Beans, web applications, and the management interface. Initially, JBoss EAP comes preconfigured with two security realms by default: ManagementRealm and ApplicationRealm . Both security realms use the file system to store mappings between users and passwords and users and group membership information. They both use a digest mechanism by default when authenticating. A digest mechanism is an authentication mechanism that authenticates the user by making use of one-time, one-way hashes comprising various pieces of information, including information stored in the usernames and passwords mapping property file. This allows JBoss EAP to authenticate users without sending any passwords in plain text over the network. The JBoss EAP installation contains a script that enables administrators to add users to both realms. When users are added in this way, the username and hashed password are stored in the corresponding usernames and passwords properties file. When a user attempts to authenticate, JBoss EAP sends back a one-time use number, nonce, to the client. The client then generates a one-way hash using its username, password, nonce, and a few other fields, and sends the username, nonce, and one-way hash to JBoss EAP. JBoss EAP looks up the user's prehashed password and uses it, along with the provided username, nonce, and a few other fields, to generate another one-way hash in the same manner. If all the same information is used on both sides, including the correct password, hashes will match and the user is authenticated. Although security realms use the digest mechanism by default, they may be reconfigured to use other authentication mechanisms. On startup, the management interfaces determine which authentication mechanisms will be enabled based on what authentication mechanisms are configured in ManagementRealm . Security realms are not involved in any authorization decisions; however, they can be configured to load a user's group membership information, which can subsequently be used to make authorization decisions. After a user has been authenticated, a second step occurs to load the group membership information based on the username. By default, the ManagementRealm is used during authentication and authorization for the management interfaces. The ApplicationRealm is a default realm made available for web applications and Jakarta Enterprise Beans to use when authenticating and authorizing users. 2.7.2. 
Default Security By default, the core management authentication secures each of the management interfaces, HTTP and native, in two different forms: local clients and remote clients, both of which are configured using the ManagementRealm security realm by default. These defaults may be configured differently or replaced entirely. Note Out of the box, the management interfaces are configured to use simple access controls, which does not use roles. As a result, all users by default, when using simple access controls, have the same privileges as the SuperUser role, which essentially has access to everything. 2.7.2.1. Local and Remote Client Authentication with the Native Interface The native interface, or management CLI, can be invoked locally on the same host as the running JBoss EAP instance or remotely from another machine. When attempting to connect using the native interface, JBoss EAP presents the client with a list of available SASL authentication mechanisms, for example, local jboss user , BASIC, etc. The client chooses its desired authentication mechanism and attempts to authenticate with the JBoss EAP instance. If it fails, it retries with any remaining mechanisms or stops attempting to connect. Local clients have the option to use the local jboss user authentication mechanism. This security mechanism is based on the client's ability to access the local file system. It validates that the user attempting to log in actually has access to the local file system on the same host as the JBoss EAP instance. This authentication mechanism happens in four steps: The client sends a message to the server that includes a request to authenticate using local jboss user . The server generates a one-time token, writes it to a unique file, and sends a message to the client with the full path of the file. The client reads the token from the file and sends it to the server, verifying that it has local access to the file system. The server verifies the token and then deletes the file. This form of authentication is based on the principle that if physical access to the file system is achieved, other security mechanisms are superfluous. The reasoning is that if a user has local file system access, that user has enough access to create a new user or otherwise thwart other security mechanisms put in place. This is sometimes referred to as silent authentication because it allows the local user to access the management CLI without username or password authentication. This functionality is enabled as a convenience and to assist local users running management CLI scripts without requiring additional authentication. It is considered a useful feature given that access to the local configuration typically also gives the user the ability to add their own user details or otherwise disable security checks. The native interface can also be accessed from other servers, or even the same server, as a remote client. When accessing the native interface as a remote client, clients will not be able to authenticate using local jboss user and will be forced to use another authentication mechanism, for example, DIGEST. If a local client fails to authenticate by using local jboss user , it will automatically fall back and attempt to use the other mechanisms as a remote client. Note The management CLI may be invoked from other servers, or even the same server, using the HTTP interface as opposed to the native interface. 
All HTTP connections, CLI or otherwise, are considered to be remote and NOT covered by local interface authentication. Important By default, the native interface is not configured, and all management CLI traffic is handled by the HTTP interface. JBoss EAP 7 supports HTTP upgrade, which allows a client to make an initial connection over HTTP but then send a request to upgrade that connection to another protocol. In the case of the management CLI, an initial request over HTTP to the HTTP interface is made, but then the connection is upgraded to the native protocol. This connection is still handled over the HTTP interface, but it is using the native protocol for communication rather than HTTP. Alternatively, the native interface may still be enabled and used if desired. 2.7.2.2. Local and Remote Client Authentication with the HTTP Interface The HTTP interface can be invoked locally by clients on the same host as the running JBoss EAP instance or remotely by clients from another machine. Despite allowing local and remote clients to access the HTTP interface, all clients accessing the HTTP interface are treated as remote connections. When a client attempts to connect to the HTTP management interfaces, JBoss EAP sends back an HTTP response with a status code of 401 Unauthorized , and a set of headers that list the supported authentication mechanisms, for example, Digest, GSSAPI, and so on. The header for Digest also includes the nonce generated by JBoss EAP. The client looks at the headers and chooses which authentication method to use and sends an appropriate response. In the case where the client chooses Digest, it prompts the user for their username and password. The client uses the supplied fields such as username and password, the nonce, and a few other pieces of information to generate a one-way hash. The client then sends the one-way hash, username, and nonce back to JBoss EAP as a response. JBoss EAP takes that information, generates another one-way hash, compares the two, and authenticates the user based on the result. 2.7.3. Advanced Security There are a number of ways to change the default configuration of management interfaces as well as the authentication and authorization mechanisms to affect how it is secured. 2.7.3.1. Updating the Management Interfaces In addition to modifying the Authentication and Authorization mechanisms, JBoss EAP allows administrators to update the configuration of the management interface itself. There are a number of options. Configuring the Management Interfaces to Use One-way SSL/TLS Configuring the JBoss EAP management console for communication only using one-way SSL/TLS provides increased security. All network traffic between the client and management console is encrypted, which reduces the risk of security attacks, such as a man-in-the-middle attack. Anyone administering a JBoss EAP instance has greater permissions on that instance than non-privileged users, and using one-way SSL/TLS helps protect the integrity and availability of that instance. When configuring one-way SSL/TLS with JBoss EAP, authority-signed certificates are preferred over self-signed certificates because they provide a chain of trust. Self-signed certificates are permitted but are not recommended. Using Two-way SSL/TLS Two-way SSL/TLS authentication, also known as client authentication, authenticates the client and the server using SSL/TLS certificates. This provides assurance that not only is the server what it says it is, but the client is also what it says it is. 
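For orientation, securing the management HTTP interface with one-way SSL/TLS through the elytron subsystem follows the general shape below on a standalone server. This is a sketch under assumed names (httpsKS, httpsKM, httpsSSC) and an assumed keystore file and password; the procedures in How to Configure Server Security for JBoss EAP are the authoritative steps:

/subsystem=elytron/key-store=httpsKS:add(path=management.keystore,relative-to=jboss.server.config.dir,credential-reference={clear-text=secret},type=JKS)
/subsystem=elytron/key-manager=httpsKM:add(key-store=httpsKS,credential-reference={clear-text=secret})
/subsystem=elytron/server-ssl-context=httpsSSC:add(key-manager=httpsKM,protocols=["TLSv1.2"])
/core-service=management/management-interface=http-interface:write-attribute(name=ssl-context,value=httpsSSC)
/core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding,value=management-https)
reload

Two-way SSL/TLS builds on the same elements by additionally defining a trust-manager backed by a truststore of client certificates and referencing it, together with need-client-auth=true, from the server-ssl-context.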
Updating or Creating a New Security Realm The default security realm can be updated or replaced with a new security realm. 2.7.3.2. Adding Outbound Connections Some security realms connect to external interfaces, such as an LDAP server. An outbound connection defines how to make this connection. A predefined connection type, ldap-connection , sets all of the required and optional attributes to connect to the LDAP server and verify the credential. 2.7.3.3. Adding RBAC to the Management Interfaces By default the RBAC system is disabled. It is enabled by changing the provider attribute from simple to rbac . This can be done using the management CLI. When RBAC is disabled or enabled on a running server, the server configuration must be reloaded before it takes effect. When RBAC is enabled for the management interfaces, the role assigned to a user determines the resources to which they have access and what operations they can conduct with a resource's attributes. Only users of the Administrator or SuperUser role can view and make changes to the access control system. Warning Enabling RBAC without having users and roles properly configured could result in administrators being unable to log in to the management interfaces. RBAC's Effect on the Management Console In the management console, some controls and views are disabled, which show up as grayed out, or not visible at all, depending on the permissions of the role the user has been assigned. If the user does not have read permissions to a resource attribute, that attribute will appear blank in the console. For example, most roles cannot read the username and password fields for data sources. If the user has read permissions but does not have write permissions to a resource attribute, that attribute will be disabled in the edit form for the resource. If the user does not have write permissions to the resource, the edit button for the resource will not appear. If a user does not have permissions to access a resource or attribute, meaning it is unaddressable for that role, it will not appear in the console for that user. An example of that is the access control system itself, which is only visible to a few roles by default. The management console also provides an interface for the following common RBAC tasks: View and configure what roles are assigned to, or excluded from, each user. View and configure what roles are assigned to, or excluded from, each group. View group and user membership per role. Configure default membership per role. Create a scoped role. Note Constraints cannot be configured in the management console at this time. RBAC's Effect on the Management CLI or Management API Users of the management CLI or management API will encounter slightly different behavior when RBAC is enabled. Resources and attributes that cannot be read are filtered from results. If the filtered items are addressable by the role, their names are listed as filtered-attributes in the response-headers section of the result. If a resource or attribute is not addressable by the role, it is not listed. Attempting to access a resource that is not addressable will result in a Resource Not Found error. If a user attempts to write or read a resource that they can address but lacks the appropriate write or read permissions, a Permission Denied error is returned. 
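The management CLI steps referred to earlier, enabling RBAC by changing the provider attribute and mapping users or groups to roles, take roughly the following form. This is a sketch only; Monitor is one of the standard roles, while alice, bob, and ops-team are placeholder user and group names:

/core-service=management/access=authorization:write-attribute(name=provider,value=rbac)
reload
/core-service=management/access=authorization/role-mapping=Monitor:add
/core-service=management/access=authorization/role-mapping=Monitor/include=user-alice:add(name=alice,type=USER)
/core-service=management/access=authorization/role-mapping=Monitor/include=group-ops-team:add(name=ops-team,type=GROUP)
/core-service=management/access=authorization/role-mapping=Monitor/exclude=user-bob:add(name=bob,type=USER)

In a managed domain, reload the host, for example with reload --host=HOST_NAME, rather than the standalone server.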
The management CLI can perform all of the same RBAC tasks as the management console as well as a few additional tasks: Enable and disable RBAC Change the permission combination policy Configure application resource and resource sensitivity constraints RBAC's Effect on Jakarta Management Managed Beans Role-Based Access Control applies to Jakarta Management in three ways: The management API of JBoss EAP is exposed as Jakarta Management managed beans. These managed beans are referred to as core mbeans, and access to them is controlled and filtered exactly the same as the underlying management API itself. The jmx subsystem is configured with write permissions being sensitive. This means only users of the Administrator and SuperUser roles can make changes to that subsystem. Users of the Auditor role can also read this subsystem configuration. By default, managed beans registered by deployed applications and services, or non-core MBeans, can be accessed by all management users, but only users of the Maintainer, Operator, Administrator, and SuperUser roles can write to them. RBAC Authentication RBAC works with the standard authentication providers that are included with JBoss EAP: Username/Password Users are authenticated using a username and password combination that is verified according to the settings of the ManagementRealm, which has the ability to use a local properties file or LDAP. Client Certificate The truststore provides authentication information for client certificates. local jboss user The jboss-cli script authenticates automatically as local jboss user if the server is running on the same machine. By default, local jboss user is a member of the SuperUser group. Regardless of which provider is used, JBoss EAP is responsible for assigning roles to users. When authenticating with the ManagementRealm or an LDAP server, those systems can supply user group information. This information can also be used by JBoss EAP to assign roles to users. 2.7.3.4. Using LDAP with the Management Interfaces JBoss EAP includes several authentication and authorization modules that allow an LDAP server to be used as the authentication and authorization authority for web and Jakarta Enterprise Beans applications. To use an LDAP directory server as the authentication source for the management console, management CLI, or management API, the following tasks must be performed: Create an outbound connection to the LDAP server. Create an LDAP-enabled security realm or update an existing security realm to use LDAP. Reference the new security realm in the management interface. The LDAP authenticator operates by first establishing a connection to the remote directory server. It then performs a search using the username, which the user passed to the authentication system, to find the fully qualified distinguished name (DN) of the LDAP record. A new connection to the LDAP server is established, using the DN of the user and the password supplied by the user as the credentials. If this authentication to the LDAP server is successful, the DN is verified as valid. Once an LDAP-enabled security realm is created, it can be referenced by the management interface. The management interface will use the security realm for authentication. JBoss EAP can also be configured to use an outbound connection to an LDAP server using two-way SSL/TLS for authentication in the management interface and management CLI. 2.7.3.5. Jakarta Authentication and the Management Interfaces Jakarta Authentication can be used to secure the management interfaces.
When using Jakarta Authentication for the management interfaces, the security realm must be configured to use a security domain. This introduces a dependency between core services and the subsystems. While SSL/TLS is not required to use Jakarta Authentication to secure the management interfaces, it is recommended that administrators enable SSL/TLS to avoid accidentally transmitting sensitive information in an unsecured manner. Note When JBoss EAP instances are running in admin-only mode, using Jakarta Authentication to secure the management interfaces is not supported. For more information on admin-only mode, see Running JBoss EAP in Admin-only Mode in the JBoss EAP Configuration Guide . 2.8. Security Subsystem The security subsystem provides security infrastructure for applications and is based on the Jakarta Authentication API. The subsystem uses a security context associated with the current request to expose the capabilities of the authentication manager, authorization manager, audit manager, and mapping manager to the relevant container. The authentication and authorization managers handle authentication and authorization. The mapping manager handles adding, modifying, or deleting information from a principal, role, or attribute before passing the information to the application. The auditing manager allows users to configure provider modules to control the way that security events are reported. In most cases, administrators should need to focus only on setting up and configuring security domains in regards to updating the configuration of the security subsystem. Outside of security domains, the only security element that may need to be changed is deep-copy-subject-mode . See the Security Management section for more information on deep copy subject mode. 2.8.1. Security Domains A security domain is a set of Jakarta Authentication declarative security configurations that one or more applications use to control authentication, authorization, auditing, and mapping. Four security domains are included by default: jboss-ejb-policy , jboss-web-policy , other , and jaspitest . The jboss-ejb-policy and jboss-web-policy security domains are the default authorization mechanisms for the JBoss EAP instance. They are used if an application's configured security domain does not define any authentication mechanisms. Those security domains, along with other , are also used internally within JBoss EAP for authorization and are required for correct operation. The jaspitest security domain is a simple Jakarta Authentication security domain included for development purposes. A security domain comprises configurations for authentication, authorization, security mapping, and auditing. Security domains are part of the JBoss EAP security subsystem and are managed centrally by the domain controller or standalone server. Users can create as many security domains as needed to accommodate application requirements. You can also configure the type of authentication cache to be used by a security domain, using the cache-type attribute. If this attribute is removed, no cache will be used. The allowed values for this property are default or infinispan . Comparison Between Elytron and PicketBox Security Domains A deployment should be associated with either a single Elytron security domain or one or more legacy PicketBox security domain. A deployment should not be associated with both. That is an invalid configuration. 
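The two kinds of security domain also live in different subsystems, which is a quick way to tell them apart in configuration. The commands below are a minimal sketch with placeholder names (legacy-app-domain, app-domain, exampleRealm) and assume that the Elytron realm and the built-in default-permission-mapper already exist; the first command defines a legacy PicketBox security domain with a default authentication cache, the second an Elytron security domain:

/subsystem=security/security-domain=legacy-app-domain:add(cache-type=default)
/subsystem=elytron/security-domain=app-domain:add(default-realm=exampleRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleRealm}])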
An exception occurs if a deployment is associated with more than one Elytron security domain, whereas a deployment can be associated with multiple legacy security domains. Note When working with PicketBox, the security domain encapsulates both access to the underlying identity store and the mapping for authorization decisions. Thus, users of PicketBox with different stores are required to use different security domains for different sources. In Elytron, these two functions are separated. Access to the stores is handled by security realms and mapping for authorization is handled by security domains. So, a deployment requiring independent PicketBox security domains does not necessarily require independent Elytron security domains. 2.8.2. Using Security Realms and Security Domains Security realms and security domains can be used to secure web applications deployed to JBoss EAP. When deciding if either should be used, it is important to understand the difference between the two. Web applications and Jakarta Enterprise Beans deployments can only use security domains directly. They perform the actual authentication and authorization using login modules using the identity information passed from an identity store. Security domains can be configured to use security realms for identity information; for example, other allows applications to specify a security realm to use for authentication and getting authorization information. They can also be configured to use external identity stores. Web applications and Jakarta Enterprise Beans deployments cannot be configured to directly use security realms for authentication. The security domains are also part of the security subsystem and are loaded after core services. Only the core management, for example the management interfaces and the Jakarta Enterprise Beans remoting endpoints, can use the security realms directly. They are identity stores that provide authentication as well as authorization information. They are also a core service and are loaded before any subsystems are started. The out-of-the-box security realms, ManagementRealm and ApplicationRealm , use a simple file-based authentication mechanism, but they can be configured to use other mechanisms. 2.8.3. Security Auditing Security auditing refers to triggering events, such as writing to a log, in response to an event that happens within the security subsystem. Auditing mechanisms are configured as part of a security domain, along with authentication, authorization, and security mapping details. Auditing uses provider modules to control the way that security events are reported. JBoss EAP ships with several security auditing providers, but custom ones may be used. The core management of JBoss EAP also has its own security auditing and logging functionality, which is configured separately and is not part of the security subsystem. 2.8.4. Security Mapping Security mapping adds the ability to combine authentication and authorization information after the authentication or authorization happens but before the information is passed to your application. Roles for authorization, principals for authentication, or credentials which are attributes that are not principals or roles, may all be mapped. Role mapping is used to add, replace, or remove roles to the subject after authentication. Principal mapping is used to modify a principal after authentication. You can use credential mapping to convert attributes from an external system to be used by your application. 
Conversely, you can also use credential mapping to convert attributes from your application for use by an external system. 2.8.5. Password Vault System JBoss EAP has a password vault to encrypt sensitive strings, store them in an encrypted keystore, and decrypt them for applications and verification systems. In plain text configuration files, such as XML deployment descriptors, it is sometimes necessary to specify passwords and other sensitive information. The JBoss EAP password vault can be used to securely store sensitive strings for use in plain text files. 2.8.6. Security Domain Configuration Security domains are configured centrally either at the domain controller or on the standalone server. When security domains are used, an application may be configured to use a security domain in lieu of individually configuring security. This allows users and administrators to leverage Declarative Security. Example One common scenario that benefits from this type of configuration structure is the process of moving applications between testing and production environments. If an application has its security individually configured, it may need to be updated every time it is promoted to a new environment, for example, from a testing environment to a production environment. If that application used a security domain instead, the JBoss EAP instances in the individual environments would have their security domains properly configured for the current environment, allowing the application to rely on the container to provide the proper security configuration, using the security domain. 2.8.6.1. Login Modules JBoss EAP includes several bundled login modules suitable for most user management roles that are configured within a security domain. The security subsystem offers some core login modules that can read user information from a relational database, an LDAP server, or flat files. In addition to these core login modules, JBoss EAP provides other login modules that provide user information and functionality for customized needs. Summary of Commonly Used Login Modules Ldap Login Module The Ldap login module is a login module implementation that authenticates against an LDAP server. The security subsystem connects to the LDAP server using connection information, that is, a bindDN that has permissions to search the baseCtxDN and rolesCtxDN trees for the user and roles, provided using a Java Naming and Directory Interface initial context. When a user attempts to authenticate, the LDAP login module connects to the LDAP server and passes the user's credentials to the LDAP server. Upon successful authentication, an InitialLDAPContext is created for that user within JBoss EAP, populated with the user's roles. LdapExtended Login Module The LdapExtended login module searches for the user as well as the associated roles to bind for authentication. The roles query recursively follows DNs to navigate a hierarchical role structure. The login module options include whatever options the chosen LDAP Java Naming and Directory Interface provider supports. UsersRoles Login Module The UsersRoles login module is a simple login module that supports multiple users and user roles loaded from Java properties files. The primary purpose of this login module is to easily test the security settings of multiple users and roles using properties files deployed with the application. Database Login Module The Database login module is a JDBC login module that supports authentication and role mapping.
This login module is used if username, password, and role information are stored in a relational database. This works by providing a reference to logical tables containing principals and roles in the expected format. Certificate Login Module The Certificate login module authenticates users based on X509 certificates. A typical use case for this login module is CLIENT-CERT authentication in the web tier. This login module only performs authentication and must be combined with another login module capable of acquiring authorization roles to completely define access to secured web or Jakarta Enterprise Beans components. Two subclasses of this login module, CertRolesLoginModule and DatabaseCertLoginModule , extend the behavior to obtain the authorization roles from either a properties file or database. Identity Login Module The Identity login module is a simple login module that associates a hard-coded username to any subject authenticated against the module. It creates a SimplePrincipal instance using the name specified by the principal option. This login module is useful when a fixed identity is required to be provided to a service. This can also be used in development environments for testing the security associated with a given principal and associated roles. RunAs Login Module The RunAs login module is a helper module that pushes a run-as role onto the stack for the duration of the login phase of authentication; it then pops the run-as role from the stack in either the commit or abort phase. The purpose of this login module is to provide a role for other login modules that must access secured resources to perform their authentication, for example, a login module that accesses a secured Jakarta Enterprise Beans. The RunAs login module must be configured ahead of the login modules that require a run as role established. Client Login Module The Client login module is an implementation of a login module for use by JBoss clients when establishing caller identity and credentials. This creates a new SecurityContext , assigns it a principal and a credential, and sets the SecurityContext to the ThreadLocal security context. The Client login module is the only supported mechanism for a client to establish the current thread's caller. Both standalone client applications and server environments, acting as JBoss Jakarta Enterprise Beans clients where the security environment has not been configured to use the JBoss EAP security subsystem transparently, must use the Client login module. Warning This login module does not perform any authentication. It merely copies the login information provided to it into the server Jakarta Enterprise Beans invocation layer for subsequent authentication on the server. Within JBoss EAP, this is only supported for switching a user's identity for in-JVM calls. This is not supported for remote clients to establish an identity. SPNEGO Login Module The SPNEGO login module is an implementation of a login module that establishes caller identity and credentials with a KDC. The module implements SPNEGO and is a part of the JBoss Negotiation project. This authentication can be used in the chained configuration with the AdvancedLdap login module to allow cooperation with an LDAP server. Web applications must also enable the NegotiationAuthenticator within the application to use this login module. RoleMapping Login Module The RoleMapping login module supports mapping roles that are the end result of the authentication process to one or more declarative roles. 
For example, if the authentication process has determined that the user John has the roles ldapAdmin and testAdmin, and the declarative role defined in the web.xml or ejb-jar.xml file for access is admin, then this login module maps the ldapAdmin and testAdmin roles to John. The RoleMapping login module must be defined as an optional module to a login module configuration because it alters the mapping of the previously mapped roles. Remoting Login Module The Remoting login module checks whether the request that is currently being authenticated was received over the remoting connection. In cases where the request was received using the remoting interface, that request is associated with the identity created during the authentication process. RealmDirect Login Module The RealmDirect login module allows an existing security realm to be used in making authentication and authorization decisions. When configured, this module will look up identity information using the referenced realm for making authentication decisions and mapping user roles. For example, the preconfigured other security domain that ships with JBoss EAP has a RealmDirect login module. If no realm is referenced in this module, the ApplicationRealm security realm is used by default. Custom Modules In cases where the login modules bundled with the JBoss EAP security framework do not meet the needs of the security environment, a custom login module implementation may be written. The AuthenticationManager requires a particular usage pattern of the Subject principals set. A full understanding of the Jakarta Authentication Subject class's information storage features and the expected usage of these features is required to write a login module that works with the AuthenticationManager. The UnauthenticatedIdentity login module option is also commonly used. There are certain cases when requests are not received in an authenticated format. The Unauthenticated Identity is a login module configuration option that assigns a specific identity, for example, guest, to requests that are made with no associated authentication information. This can be used to allow unprotected servlets to invoke methods on Jakarta Enterprise Beans that do not require a specific role. Such a principal has no associated roles and can only access either unsecured Jakarta Enterprise Beans methods or methods that are associated with the unchecked permission constraint. 2.8.6.2. Password Stacking Multiple login modules can be chained together in a stack, with each login module providing the credentials verification and role assignment during authentication. This works for many use cases, but sometimes credentials verification and role assignment are split across multiple user management stores. Consider the case where users are managed in a central LDAP server and application-specific roles are stored in the application's relational database. The password-stacking module option captures this relationship. To use password stacking, each login module should set the password-stacking attribute to useFirstPass, which is located in the <module-option> section. If a module configured for password stacking has authenticated the user, all the other stacking modules will consider the user authenticated and only attempt to provide a set of roles for the authorization step.
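In the legacy security subsystem, this is expressed by setting the password-stacking module option on each login module in the stack. The commands below are a sketch only, with placeholder names; the usual LDAP connection options for the LdapExtended module and the identity and role queries for the Database module are omitted:

/subsystem=security/security-domain=stacked-domain:add(cache-type=default)
/subsystem=security/security-domain=stacked-domain/authentication=classic:add
/subsystem=security/security-domain=stacked-domain/authentication=classic/login-module=LdapExtended:add(code=LdapExtended,flag=required,module-options=[("password-stacking"=>"useFirstPass")])
/subsystem=security/security-domain=stacked-domain/authentication=classic/login-module=Database:add(code=Database,flag=required,module-options=[("password-stacking"=>"useFirstPass"),("dsJndiName"=>"java:jboss/datasources/ExampleDS")])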
When the password-stacking option is set to useFirstPass , this module first looks for a shared username and password under the property names javax.security.auth.login.name and javax.security.auth.login.password , respectively, in the login module shared state map. If found, these properties are used as the principal name and password. If not found, the principal name and password are set by this login module and stored under the property names javax.security.auth.login.name and javax.security.auth.login.password , respectively. 2.8.6.3. Password Hashing Most login modules must compare a client-supplied password to a password stored in a user management system. These modules generally work with plain text passwords, but they can be configured to support hashed passwords to prevent plain text passwords from being stored on the server side. JBoss EAP supports the ability to configure the hashing algorithm, encoding, and character set. It also defines when the user password and store password are hashed. Important Red Hat JBoss Enterprise Application Platform Common Criteria certified configuration does not support hash algorithms weaker than SHA-256. 2.8.7. Security Management The security management portion of the security subsystem is used to override the high-level behaviors of the security subsystem. Each setting is optional. It is unusual to change any of these settings except for the deep copy subject mode. 2.8.7.1. Deep Copy Mode If the deep copy subject mode is disabled, which it is by default, copying a security data structure makes a reference to the original rather than copying the entire data structure. This behavior is more efficient, but it is prone to data corruption if multiple threads with the same identity clear the subject by means of a flush or logout operation. If the deep copy subject mode is enabled, a complete copy of the data structure, along with all its associated data as long as they are marked cloneable, is made. This is more thread-safe but less efficient. 2.8.8. Additional Components 2.8.8.1. Jakarta Authentication Jakarta Authentication is a pluggable interface for Java applications and is defined in Jakarta Authentication specification . In addition to Jakarta Authentication authentication, JBoss EAP allows for Jakarta Authentication to be used. Jakarta Authentication authentication is configured using login modules in a security domain, and those modules may be stacked. The jaspitest security domain is a simple Jakarta Authentication security domain that is included by default for development purposes. The web-based administration console provides the following operations to configure the Jakarta Authentication module: add edit remove reset Applications deployed to JBoss EAP require a special authenticator to be configured in their deployment descriptors to use the Jakarta Authentication security domains. 2.8.8.2. Jakarta Authorization Jakarta Authorization is a standard that defines a contract between containers and authorization service providers, which results in the implementation of providers for use by containers. For details about the specifications, see Jakarta Authorization 1.1 Specification . JBoss EAP implements support for Jakarta Authorization within the security functionality of the security subsystem. 2.8.8.3. Jakarta Security Jakarta Security defines portable plug-in interfaces for authentication and identity stores, and a new injectable-type SecurityContext interface that provides an access point for programmatic security. 
You can use the built-in implementations of these APIs, or define custom implementations. For details about the specifications, see Jakarta Security Specification . The Jakarta Security API is available in the elytron subsystem and can be enabled from the management CLI. For more information, see About Jakarta Security API in the Development Guide. 2.8.8.4. About Fine-Grained Authorization and XACML Fine-grained authorization allows administrators to adapt to the changing requirements and multiple variables involved in the decision making process for granting access to a module. As a result, fine-grained authorization can become complex. Note The XACML bindings, for web or Jakarta Enterprise Beans, are not supported in JBoss EAP. JBoss EAP uses XACML as a medium to achieve fine-grained authorization. XACML provides standards-based solution to the complex nature of achieving fine-grained authorization. XACML defines a policy language and an architecture for decision making. The XACML architecture includes a Policy Enforcement Point (PEP) which intercepts any requests in a normal program flow and asks a Policy Decision Point (PDP) to make an access decision based on the policies associated with the PDP. The PDP evaluates the XACML request created by the PEP and runs through the policies to make one of the following access decisions. PERMIT The access is approved. DENY The access is denied. INDETERMINATE There is an error at the PDP. NOTAPPLICABLE There is some attribute missing in the request or there is no policy match. XACML also has the following features: Oasis XACML v2.0 library JAXB v2.0 based object model ExistDB integration for storing and retrieving XACML policies and attributes 2.8.8.5. SSO JBoss EAP provides out-of-the-box support for clustered and non-clustered SSO using the undertow and infinispan subsystems. This requires: A configured security domain that handles authentication and authorization. The SSO infinispan replication cache. It is present in the ha and full-ha profiles for a managed domain, or by using the standalone-ha.xml or standalone-full-ha.xml configuration files for a standalone server. The web cache-container and SSO replication cache within it must be present. The undertow subsystem needs to be configured to use SSO. Each application that will share the SSO information must be configured to use the same security domain.
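As an illustration only (the detailed procedure is not part of this overview), enabling SSO for the default Undertow host from the management CLI might look like the following sketch; the cookie domain is a placeholder, and the shared security domain must already be configured for the participating applications.
/subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:add(domain=example.com)
reload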
|
[
"/core-service=management/access=authorization:write-attribute(name=provider,value=rbac)",
"/core-service=management/access=authorization/role-mapping=Monitor:add()",
"/core-service=management/access=authorization/role-mapping=Monitor/include=user-timRO:add(name=timRO,type=USER)",
"/core-service=management/access=authorization/role-mapping=Monitor/include=group-LDAP_MONITORS:add(name=LDAP_MONITORS, type=GROUP)",
"/core-service=management/access=authorization/role-mapping=Monitor/exclude=group-LDAP_MONITORS:add(name=LDAP_, type=GROUP)",
"reload --host=master",
"reload"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/security_architecture/eap_security_out_of_the_box
|
Appendix B. Restoring manual changes overwritten by a Puppet run
|
Appendix B. Restoring manual changes overwritten by a Puppet run If your manual configuration has been overwritten by a Puppet run, you can restore the files to their previous state. The following example shows you how to restore a DHCP configuration file overwritten by a Puppet run. Procedure Copy the file you intend to restore. This allows you to compare the files to check for any mandatory changes required by the upgrade. This is not common for DNS or DHCP services. Check the log files to note down the md5sum of the overwritten file. For example: Restore the overwritten file: Compare the backup file and the restored file, and edit the restored file to include any mandatory changes required by the upgrade.
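A hedged sketch of the final comparison step, reusing the DHCP paths from the example above; the checksum is the one reported in the Puppet log.
# compare the manual backup with the restored file and review the differences
diff -u /etc/dhcp/dhcpd.backup /etc/dhcp/dhcpd.conf
# optionally confirm the restored file matches the checksum noted from the log
md5sum /etc/dhcp/dhcpd.conf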
|
[
"cp /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.backup",
"journalctl -xe /Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket restore --local --bucket /var/lib/puppet/clientbucket /etc/dhcp/dhcpd.conf \\ 622d9820b8e764ab124367c68f5fa3a1"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_disconnected_network_environment/restoring-manual-changes-overwritten-by-a-puppet-run_satellite
|
Chapter 1. Overview of deploying in external mode
|
Chapter 1. Overview of deploying in external mode Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on any platform. See Planning your deployment for more information. For instructions regarding how to install a RHCS cluster, see the installation guide . Follow these steps to deploy OpenShift Data Foundation in external mode: Deploy OpenShift Data Foundation using Red Hat Ceph Storage . Deploy OpenShift Data Foundation using IBM FlashSystem . 1.1. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. 1.2. Network ports required between OpenShift Container Platform and Ceph when using external mode deployment List of TCP ports, source OpenShift Container Platform and destination RHCS TCP ports To be used for 6789, 3300 Ceph Monitor 6800 - 7300 Ceph OSD, MGR, MDS 9283 Ceph MGR Prometheus Exporter For more information about why these ports are required, see Chapter 2. Ceph network configuration of RHCS Configuration Guide .
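As an illustration only (not taken from the referenced guides), opening these ports on an external RHCS node that uses firewalld could look like the following; adjust the zone and port list to match your environment.
firewall-cmd --zone=public --permanent --add-port=6789/tcp --add-port=3300/tcp
firewall-cmd --zone=public --permanent --add-port=6800-7300/tcp
firewall-cmd --zone=public --permanent --add-port=9283/tcp
firewall-cmd --reload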
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_in_external_mode/overview-of-deploying-in-external-mode_rhodf
|
Part V. Servers
|
Part V. Servers This part discusses various topics related to servers such as how to set up a web server or share files and directories over a network.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/part-servers
|
Chapter 1. Red Hat Enterprise Linux 9
|
Chapter 1. Red Hat Enterprise Linux 9 This section outlines the packages released for Red Hat Enterprise Linux 9. 1.1. Red Hat Satellite Client 6 for RHEL 9 x86_64 (RPMs) The following table outlines the packages included in the satellite-client-6-for-rhel-9-x86_64-rpms repository. Table 1.1. Red Hat Satellite Client 6 for RHEL 9 x86_64 (RPMs) Name Version Advisory gofer 2.12.5-7.1.el9sat RHBA-2022:96562 katello-agent 3.5.7-3.el9sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el9sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el9sat RHBA-2022:96562 puppet-agent 7.16.0-2.el9sat RHBA-2022:96562 python3-gofer 2.12.5-7.1.el9sat RHBA-2022:96562 python3-gofer-proton 2.12.5-7.1.el9sat RHBA-2022:96562 python3-qpid-proton 0.35.0-2.el9 RHBA-2022:96562 qpid-proton-c 0.35.0-2.el9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el9sat RHBA-2022:96562 1.2. Red Hat Satellite Client 6 for RHEL 9 ppc64le (RPMs) The following table outlines the packages included in the satellite-client-6-for-rhel-9-ppc64le-rpms repository. Table 1.2. Red Hat Satellite Client 6 for RHEL 9 ppc64le (RPMs) Name Version Advisory gofer 2.12.5-7.1.el9sat RHBA-2022:96562 katello-agent 3.5.7-3.el9sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el9sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el9sat RHBA-2022:96562 python3-gofer 2.12.5-7.1.el9sat RHBA-2022:96562 python3-gofer-proton 2.12.5-7.1.el9sat RHBA-2022:96562 python3-qpid-proton 0.35.0-2.el9 RHBA-2022:96562 qpid-proton-c 0.35.0-2.el9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el9sat RHBA-2022:96562 1.3. Red Hat Satellite Client 6 for RHEL 9 s390x (RPMs) The following table outlines the packages included in the satellite-client-6-for-rhel-9-s390x-rpms repository. Table 1.3. Red Hat Satellite Client 6 for RHEL 9 s390x (RPMs) Name Version Advisory gofer 2.12.5-7.1.el9sat RHBA-2022:96562 katello-agent 3.5.7-3.el9sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el9sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el9sat RHBA-2022:96562 python3-gofer 2.12.5-7.1.el9sat RHBA-2022:96562 python3-gofer-proton 2.12.5-7.1.el9sat RHBA-2022:96562 python3-qpid-proton 0.35.0-2.el9 RHBA-2022:96562 qpid-proton-c 0.35.0-2.el9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el9sat RHBA-2022:96562 1.4. Red Hat Satellite Client 6 for RHEL 9 aarch64 (RPMs) The following table outlines the packages included in the satellite-client-6-for-rhel-9-aarch64-rpms repository. Table 1.4. Red Hat Satellite Client 6 for RHEL 9 aarch64 (RPMs) Name Version Advisory gofer 2.12.5-7.1.el9sat RHBA-2022:96562 katello-agent 3.5.7-3.el9sat RHBA-2022:96562 katello-host-tools 3.5.7-3.el9sat RHBA-2022:96562 katello-host-tools-tracer 3.5.7-3.el9sat RHBA-2022:96562 python3-gofer 2.12.5-7.1.el9sat RHBA-2022:96562 python3-gofer-proton 2.12.5-7.1.el9sat RHBA-2022:96562 python3-qpid-proton 0.35.0-2.el9 RHBA-2022:96562 qpid-proton-c 0.35.0-2.el9 RHBA-2022:96562 rubygem-foreman_scap_client 0.5.0-1.el9sat RHBA-2022:96562
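As a hedged example of consuming one of these repositories, a RHEL 9 x86_64 host that is already registered to Satellite could enable the repository and install listed packages as follows; the package selection shown is only an example.
subscription-manager repos --enable=satellite-client-6-for-rhel-9-x86_64-rpms
dnf install katello-host-tools katello-host-tools-tracer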
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/package_manifest/sat-6-15-rhel9
|
Machine APIs
|
Machine APIs OpenShift Container Platform 4.14 Reference guide for machine APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/machine_apis/index
|
Chapter 8. Network requirements
|
Chapter 8. Network requirements OpenShift Data Foundation requires that at least one network interface that is used for the cluster network to be capable of at least 10 gigabit network speeds. This section further covers different network considerations for planning deployments. 8.1. IPv6 support Red Hat OpenShift Data Foundation version 4.12 introduced the support of IPv6. IPv6 is supported in single stack only, and cannot be used simultaneously with IPv4. IPv6 is the default behavior in OpenShift Data Foundation when IPv6 is turned on in Openshift Container Platform. Red Hat OpenShift Data Foundation version 4.14 introduces IPv6 auto detection and configuration. Clusters using IPv6 will automatically be configured accordingly. OpenShift Container Platform dual stack with Red Hat OpenShift Data Foundation IPv4 is supported from version 4.13 and later. Dual stack on Red Hat OpenShift Data Foundation IPv6 is not supported. 8.2. Multi network plug-in (Multus) support OpenShift Data Foundation supports the ability to use multi-network plug-in Multus on bare metal infrastructures to improve security and performance by isolating the different types of network traffic. By using Multus, one or more network interfaces on hosts can be reserved for exclusive use of OpenShift Data Foundation. To use Multus, first run the Multus prerequisite validation tool. For instructions to use the tool, see OpenShift Data Foundation - Multus prerequisite validation tool . For more information about Multus networks, see Multiple networks . You can configure your Multus networks to use IPv4 or IPv6 as a technology preview. This works only for Multus networks that are pure IPv4 or pure IPv6. Networks cannot be mixed mode. Important Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Service Level Agreements, may not be functionally complete, and are not intended for production use. As Red Hat considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features. See Technology Preview Features Support Scope for more information. 8.2.1. Multus prerequisites In order for Ceph-CSI to communicate with a Multus-enabled CephCluster, some setup is required for Kubernetes hosts. These prerequisites require an understanding of how Multus networks are configured and how Rook uses them. This section will help clarify questions that could arise. Two basic requirements must be met: OpenShift hosts must be able to route successfully to the Multus public network. Pods on the Multus public network must be able to route successfully to OpenShift hosts. These two requirements can be broken down further as follows: For routing Kubernetes hosts to the Multus public network, each host must ensure the following: The host must have an interface connected to the Multus public network (the "public-network-interface"). The "public-network-interface" must have an IP address. A route must exist to direct traffic destined for pods on the Multus public network through the "public-network-interface". 
For routing pods on the Multus public network to Kubernetes hosts, the public NetworkAttachmentDefinition must be configured to ensure the following: The definition must have its IP Address Management (IPAM) configured to route traffic destined for nodes through the network. To ensure routing between the two networks works properly, no IP address assigned to a node can overlap with any IP address assigned to a pod on the Multus public network. Generally, both the NetworkAttachmentDefinition, and node configurations must use the same network technology (Macvlan) to connect to the Multus public network. Node configurations and pod configurations are interrelated and tightly coupled. Both must be planned at the same time, and OpenShift Data Foundation cannot support Multus public networks without both. The "public-network-interface" must be the same for both. Generally, the connection technology (Macvlan) should also be the same for both. IP range(s) in the NetworkAttachmentDefinition must be encoded as routes on nodes, and, in mirror, IP ranges for nodes must be encoded as routes in the NetworkAttachmentDefinition. Some installations might not want to use the same public network IP address range for both pods and nodes. In the case where there are different ranges for pods and nodes, additional steps must be taken to ensure each range routes to the other so that they act as a single, contiguous network.These requirements require careful planning. See Multus examples to help understand and implement these requirements. Tip There are often ten or more OpenShift Data Foundation pods per storage node. The pod address space usually needs to be several times larger (or more) than the host address space. OpenShift Container Platform recommends using the NMState operator's NodeNetworkConfigurationPolicies as a good method of configuring hosts to meet host requirements. Other methods can be used as well if needed. 8.2.1.1. Multus network address space sizing Networks must have enough addresses to account for the number of storage pods that will attach to the network, plus some additional space to account for failover events. It is highly recommended to also plan ahead for future storage cluster expansion and estimate how large the OpenShift Container Platform and OpenShift Data Foundation clusters may grow in the future. Reserving addresses for future expansion means that there is lower risk of depleting the IP address pool unexpectedly during expansion. It is safest to allocate 25% more addresses (or more) than the total maximum number of addresses that are expected to be needed at one time in the storage cluster's lifetime. This helps lower the risk of depleting the IP address pool during failover and maintenance. For ease of writing corresponding network CIDR configurations, rounding totals up to the nearest power of 2 is also recommended. Three ranges must be planned: If used, the public Network Attachment Definition address space must include enough IPs for the total number of ODF pods running in the openshift-storage namespace If used, the cluster Network Attachment Definition address space must include enough IPs for the total number of OSD pods running in the openshift-storage namespace If the Multus public network is used, the node public network address space must include enough IPs for the total number of OpenShift nodes connected to the Multus public network. 
Note If the cluster uses a unified address space for the public Network Attachment Definition and node public network attachments, add these two requirements together. This is relevant, for example, if DHCP is used to manage IPs for the public network. 8.2.1.1.1. Recommendation The following recommendation suffices for most organizations. The recommendation uses the last 6.25% (1/16) of the reserved private address space (192.168.0.0/16), assuming the beginning of the range is in use or otherwise desirable. Approximate maximums (accounting for 25% overhead) are given. Table 8.1. Multus recommendations Network Network range CIDR Approximate maximums Public Network Attachment Definition 192.168.240.0/21 1,600 total ODF pods Cluster Network Attachment Definition 192.168.248.0/22 800 OSDs Node public network attachments 192.168.252.0/23 400 total nodes 8.2.1.1.2. Calculation More detailed address space sizes can be determined as follows: Determine the maximum number of OSDs that are likely to be needed in the future. Add 25%, then add 5. Round the result up to the nearest power of 2. This is the cluster address space size. Begin with the un-rounded number calculated in step 1. Add 64, then add 25%. Round the result up to the nearest power of 2. This is the public address space size for pods. Determine the maximum number of total OpenShift nodes (including storage nodes) that are likely to be needed in the future. Add 25%. Round the result up to the nearest power of 2. This is the public address space size for nodes. 8.2.1.2. Verifying requirements have been met After configuring nodes and creating the Multus public NetworkAttachmentDefinition (see Creating network attachment definitions ) check that the node configurations and NetworkAttachmentDefinition configurations are compatible. To do so, verify that each node can ping pods via the public network. Start a daemonset similar to the following example: List the Multus public network IPs assigned to test pods using a command like the following example. This example command lists all IPs assigned to all test pods (each will have 2 IPs). From the output, it is easy to manually extract the IPs associated with the Multus public network. In the example, test pod IPs on the Multus public network are: 192.168.20.22 192.168.20.29 192.168.20.23 Check that each node (NODE) can reach all test pod IPs over the public network: If any node does not get a successful ping to a running pod, it is not safe to proceed. Diagnose and fix the issue, then repeat this testing. Some reasons you may encounter a problem include: The host may not be properly attached to the Multus public network (via Macvlan) The host may not be properly configured to route to the pod IP range The public NetworkAttachmentDefinition may not be properly configured to route back to the host IP range The host may have a firewall rule blocking the connection in either direction The network switch may have a firewall or security rule blocking the connection Suggested debugging steps: Ensure nodes can ping each other over using public network "shim" IPs Ensure the output of ip address 8.2.2. 
Multus examples The relevant network plan for this cluster is as follows: A dedicated NIC provides eth0 for the Multus public network Macvlan will be used to attach OpenShift pods to eth0 The IP range 192.168.0.0/16 is free in the example cluster - pods and nodes will share this IP range on the Multus public network Nodes will get the IP range 192.168.252.0/22 (this allows up to 1024 Kubernetes hosts, more than the example organization will ever need) Pods will get the remainder of the ranges (192.168.0.1 to 192.168.251.255) The example organization does not want to use DHCP unless necessary; therefore, nodes will have IPs on the Multus network (via eth0) assigned statically using the NMState operator 's NodeNetworkConfigurationPolicy resources With DHCP unavailable, Whereabouts will be used to assign IPs to the Multus public network because it is easy to use out of the box There are 3 compute nodes in the OpenShift cluster on which OpenShift Data Foundation also runs: compute-0, compute-1, and compute-2 Nodes' network policies must be configured to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Generally speaking, the host must connect to the Multus public network using the same technology that pods do. Pod connections are configured in the Network Attachment Definition. Because the host IP range is a subset of the whole range, hosts are not able to route to pods simply by IP assignment. A route must be added to hosts to allow them to route to the whole 192.168.0.0/16 range. NodeNetworkConfigurationPolicy desiredState specs will look like the following: For static IP management, each node must have a different NodeNetworkConfigurationPolicy. Select separate nodes for each policy to configure static networks. A "shim" interface is used to connect hosts to the Multus public network using the same technology as the Network Attachment Definition will use. The host's "shim" must be of the same type as planned for pods, macvlan in this example. The interface must match the Multus public network interface selected in planning, eth0 in this example. The ipv4 (or ipv6` ) section configures node IP addresses on the Multus public network. IPs assigned to this node's shim must match the plan. This example uses 192.168.252.0/22 for node IPs on the Multus public network. For static IP management, don't forget to change the IP for each node. The routes section instructs nodes how to reach pods on the Multus public network. The route destination(s) must match the CIDR range planned for pods. In this case, it is safe to use the entire 192.168.0.0/16 range because it won't affect nodes' ability to reach other nodes over their "shim" interfaces. In general, this must match the CIDR used in the Multus public NetworkAttachmentDefinition. The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' exclude option to simplify the range request. The Whereabouts routes[].dst option ensures pods route to hosts via the Multus public network. This must match the plan for how to attach pods to the Multus public network. Nodes must attach using the same technology, Macvlan. The interface must match the Multus public network interface selected in planning, eth0 in this example. The plan for this example uses whereabouts instead of DHCP for assigning IPs to pods. 
For this example, it was decided that pods could be assigned any IP in the range 192.168.0.0/16 with the exception of a portion of the range allocated to nodes (see 5). whereabouts provides an exclude directive that allows easily excluding the range allocated for nodes from its pool. This allows keeping the range directive (see 4 ) simple. The routes section instructs pods how to reach nodes on the Multus public network. The route destination ( dst ) must match the CIDR range planned for nodes. 8.2.3. Holder pod deprecation Due to the recurring maintenance impact of holder pods during upgrade (holder pods are present when Multus is enabled), holder pods are deprecated in the ODF v4.17 release and targeted for removal in the ODF v4.17 release. This deprecation requires completing additional network configuration actions before removing the holder pods. In ODF v4.15, clusters with Multus enabled are upgraded to v4.16 following standard upgrade procedures. After the ODF cluster (with Multus enabled) is successfully upgraded to v4.16, administrators must then complete the procedure documented in the article Disabling Multus holder pods to disable and remove holder pods. Be aware that this disabling procedure is time consuming; however, it is not critical to complete the entire process immediately after upgrading to v4.16. It is critical to complete the process before ODF is upgraded to v4.17. 8.2.4. Segregating storage traffic using Multus By default, Red Hat OpenShift Data Foundation is configured to use the Red Hat OpenShift Software Defined Network (SDN). The default SDN carries the following types of traffic: Pod-to-pod traffic Pod-to-storage traffic, known as public network traffic when the storage is OpenShift Data Foundation OpenShift Data Foundation internal replication and rebalancing traffic, known as cluster network traffic There are three ways to segregate OpenShift Data Foundation from OpenShift default network: Reserve a network interface on the host for the public network of OpenShift Data Foundation Pod-to-storage and internal storage replication traffic coexist on a network that is isolated from pod-to-pod network traffic. Application pods have access to the maximum public network storage bandwidth when the OpenShift Data Foundation cluster is healthy. When the OpenShift Data Foundation cluster is recovering from failure, the application pods will have reduced bandwidth due to ongoing replication and rebalancing traffic. Reserve a network interface on the host for OpenShift Data Foundation's cluster network Pod-to-pod and pod-to-storage traffic both continue to use OpenShift's default network. Pod-to-storage bandwidth is less affected by the health of the OpenShift Data Foundation cluster. Pod-to-pod and pod-to-storage OpenShift Data Foundation traffic might contend for network bandwidth in busy OpenShift clusters. The storage internal network often has an overabundance of bandwidth that is unused, reserved for use during failures. Reserve two network interfaces on the host for OpenShift Data Foundation: one for the public network and one for the cluster network Pod-to-pod, pod-to-storage, and storage internal traffic are all isolated, and none of the traffic types will contend for resources. Service level agreements for all traffic types are more able to be ensured. During healthy runtime, more network bandwidth is reserved but unused across all three networks. 
Dual network interface segregated configuration schematic example: Triple network interface full segregated configuration schematic example: 8.2.5. When to use Multus Use Multus for OpenShift Data Foundation when you need the following: Improved latency - Multus with ODF always improves latency. Use host interfaces at near-host network speeds and bypass OpenShift's software-defined Pod network. You can also perform Linux per interface level tuning for each interface. Improved bandwidth - Dedicated interfaces for OpenShift Data Foundation client data traffic and internal data traffic. These dedicated interfaces reserve full bandwidth. Improved security - Multus isolates storage network traffic from application network traffic for added security. Bandwidth or performance might not be isolated when networks share an interface, however, you can use QoS or traffic shaping to prioritize bandwidth on shared interfaces. 8.2.6. Multus configuration To use Multus, you must create network attachment definitions (NADs) before deploying the OpenShift Data Foundation cluster, which is later attached to the cluster. For more information, see Creating network attachment definitions . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A Container Network Interface (CNI) configuration inside each of these CRs defines how that interface is created. OpenShift Data Foundation supports the macvlan driver, which includes the following features: Each connection gets a sub-interface of the parent interface with its own MAC address and is isolated from the host network. Uses less CPU and provides better throughput than Linux bridge or ipvlan . Bridge mode is almost always the best choice. Near-host performance when network interface card (NIC) supports virtual ports/virtual local area networks (VLANs) in hardware. OpenShift Data Foundation supports the following two types IP address management: whereabouts DHCP Uses OpenShift/Kubernetes leases to select unique IP addresses per Pod. Does not require range field. Does not require a DHCP server to provide IPs for Pods. Network DHCP server can give out the same range to Multus Pods as well as any other hosts on the same network. Caution If there is a DHCP server, ensure Multus configured IPAM does not give out the same range so that multiple MAC addresses on the network cannot have the same IP. 8.2.7. Requirements for Multus configuration Prerequisites The interface used for the public network must have the same interface name on each OpenShift storage and worker node, and the interfaces must all be connected to the same underlying network. The interface used for the cluster network must have the same interface name on each OpenShift storage node, and the interfaces must all be connected to the same underlying network. Cluster network interfaces do not have to be present on the OpenShift worker nodes. Each network interface used for the public or cluster network must be capable of at least 10 gigabit network speeds. Each network requires a separate virtual local area network (VLAN) or subnet. See Creating Multus networks for the necessary steps to configure a Multus based configuration on bare metal.
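A minimal verification sketch, assuming the planned public interface is named eth0 as in the examples above and that the relevant nodes carry the worker role; this is not part of the referenced procedure. It checks that the interface exists and is up on each node.
# list worker nodes and check the interface on each via a debug pod
for node in $(oc get nodes -l node-role.kubernetes.io/worker -o name); do
  echo "== ${node} =="
  oc debug "${node}" -- chroot /host ip -br link show eth0
done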
|
[
"apiVersion: apps/v1 kind: DaemonSet metadata: name: multus-public-test namespace: openshift-storage labels: app: multus-public-test spec: selector: matchLabels: app: multus-public-test template: metadata: labels: app: multus-public-test annotations: k8s.v1.cni.cncf.io/networks: openshift-storage/public-net # spec: containers: - name: test image: quay.io/ceph/ceph:v18 # image known to have 'ping' installed command: - sleep - infinity resources: {}",
"oc -n openshift-storage describe pod -l app=multus-public-test | grep -o -E 'Add .* from .*' Add eth0 [10.128.2.86/23] from ovn-kubernetes Add net1 [192.168.20.22/24] from default/public-net Add eth0 [10.129.2.173/23] from ovn-kubernetes Add net1 [192.168.20.29/24] from default/public-net Add eth0 [10.131.0.108/23] from ovn-kubernetes Add net1 [192.168.20.23/24] from default/public-net",
"oc debug node/NODE Starting pod/NODE-debug To use host binaries, run `chroot /host` Pod IP: **** If you don't see a command prompt, try pressing enter. sh-5.1# chroot /host sh-5.1# ping 192.168.20.22 PING 192.168.20.22 (192.168.20.22) 56(84) bytes of data. 64 bytes from 192.168.20.22: icmp_seq=1 ttl=64 time=0.093 ms 64 bytes from 192.168.20.22: icmp_seq=2 ttl=64 time=0.056 ms ^C --- 192.168.20.22 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1046ms rtt min/avg/max/mdev = 0.056/0.074/0.093/0.018 ms sh-5.1# ping 192.168.20.29 PING 192.168.20.29 (192.168.20.29) 56(84) bytes of data. 64 bytes from 192.168.20.29: icmp_seq=1 ttl=64 time=0.403 ms 64 bytes from 192.168.20.29: icmp_seq=2 ttl=64 time=0.181 ms ^C --- 192.168.20.29 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1007ms rtt min/avg/max/mdev = 0.181/0.292/0.403/0.111 ms sh-5.1# ping 192.168.20.23 PING 192.168.20.23 (192.168.20.23) 56(84) bytes of data. 64 bytes from 192.168.20.23: icmp_seq=1 ttl=64 time=0.329 ms 64 bytes from 192.168.20.23: icmp_seq=2 ttl=64 time=0.227 ms ^C --- 192.168.20.23 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1047ms rtt min/avg/max/mdev = 0.227/0.278/0.329/0.051 ms",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-0 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-0 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-0 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-1 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-1 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-1 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-2 # [1] namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-2 # [2] desiredState: Interfaces: [3] - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan # [4] state: up mac-vlan: base-iface: eth0 # [5] mode: bridge promiscuous: true ipv4: # [6] enabled: true dhcp: false address: - ip: 192.168.252.2 # STATIC IP FOR compute-2 # [7] prefix-length: 22 routes: # [8] config: - destination: 192.168.0.0/16 # [9] next-hop-interface: odf-pub-shim",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: public-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", # [1] \"master\": \"eth0\", # [2] \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", # [3] \"range\": \"192.168.0.0/16\", # [4] \"exclude\": [ \"192.168.252.0/22\" # [5] ], \"routes\": [ # [6] {\"dst\": \"192.168.252.0/22\"} # [7] ] } }'"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/planning_your_deployment/network-requirements_rhodf
|
Chapter 3. Backup and recovery
|
Chapter 3. Backup and recovery For information about performing a backup and recovery of Ansible Automation Platform, see Backup and restore in Configuring automation execution . For information about troubleshooting backup and recovery for installations of Ansible Automation Platform Operator on OpenShift Container Platform, see the Troubleshooting section in Backup and recovery for operator environments .
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/troubleshooting_ansible_automation_platform/troubleshoot-backup-recovery
|
Chapter 12. SelfSubjectRulesReview [authorization.k8s.io/v1]
|
Chapter 12. SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. It should NOT Be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview, and LocalAccessReview are the correct way to defer authorization decisions to the API server. Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. status object SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. 12.1.1. .spec Description SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. Type object Property Type Description namespace string Namespace to evaluate rules for. Required. 12.1.2. .status Description SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. Type object Required resourceRules nonResourceRules incomplete Property Type Description evaluationError string EvaluationError can appear in combination with Rules. It indicates an error occurred during rule evaluation, such as an authorizer that doesn't support rule evaluation, and that ResourceRules and/or NonResourceRules may be incomplete. incomplete boolean Incomplete is true when the rules returned by this call are incomplete. This is most commonly encountered when an authorizer, such as an external authorizer, doesn't support rules evaluation. nonResourceRules array NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 
nonResourceRules[] object NonResourceRule holds information that describes a rule for the non-resource resourceRules array ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. resourceRules[] object ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 12.1.3. .status.nonResourceRules Description NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 12.1.4. .status.nonResourceRules[] Description NonResourceRule holds information that describes a rule for the non-resource Type object Required verbs Property Type Description nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path. "*" means all. verbs array (string) Verb is a list of kubernetes non-resource API verbs, like: get, post, put, delete, patch, head, options. "*" means all. 12.1.5. .status.resourceRules Description ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 12.1.6. .status.resourceRules[] Description ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "*" means all. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. "*" means all. resources array (string) Resources is a list of resources this rule applies to. "*" means all in the specified apiGroups. "*/foo" represents the subresource 'foo' for all resources in the specified apiGroups. verbs array (string) Verb is a list of kubernetes resource API verbs, like: get, list, watch, create, update, delete, proxy. "*" means all. 12.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectrulesreviews POST : create a SelfSubjectRulesReview 12.2.1. /apis/authorization.k8s.io/v1/selfsubjectrulesreviews Table 12.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SelfSubjectRulesReview Table 12.2. Body parameters Parameter Type Description body SelfSubjectRulesReview schema Table 12.3. HTTP responses HTTP code Reponse body 200 - OK SelfSubjectRulesReview schema 201 - Created SelfSubjectRulesReview schema 202 - Accepted SelfSubjectRulesReview schema 401 - Unauthorized Empty
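A hedged usage sketch (the namespace value is a placeholder): the review can be created directly with a POST, or indirectly through oc auth can-i --list, which performs the same kind of rules review.
# list the actions the current user may perform in a namespace
oc auth can-i --list --namespace default
# or POST a SelfSubjectRulesReview directly and print the returned status
oc create -f - -o yaml <<'EOF'
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectRulesReview
spec:
  namespace: default
EOF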
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authorization_apis/selfsubjectrulesreview-authorization-k8s-io-v1
|
Providing feedback on Red Hat JBoss Web Server documentation
|
Providing feedback on Red Hat JBoss Web Server documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_2_release_notes/providing-direct-documentation-feedback_6.0.2_rn
|
Appendix B. Contact information
|
Appendix B. Contact information Red Hat Process Automation Manager documentation team: [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_red_hat_process_automation_manager_on_red_hat_openshift_container_platform/author-group
|
function::user_int_warn
|
function::user_int_warn Name function::user_int_warn - Retrieves an int value stored in user space. Synopsis Arguments addr The user space address to retrieve the int from. General Syntax user_int_warn:long(addr:long) Description Returns the int value from a given user space address. Returns zero when the user space data is not accessible and warns (but does not abort) about the failure.
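A small illustrative one-liner, not from the reference entry: it treats the first four bytes of each write() buffer as an int. The buf_uaddr and count variables are assumed to be provided by the syscall tapset, and the probe must be run with SystemTap privileges.
# print the first int of each write() buffer, warning (not aborting) on bad addresses
stap -e 'probe syscall.write { if (count >= 4) printf("%s: %d\n", execname(), user_int_warn(buf_uaddr)) }'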
|
[
"function user_int_warn:long(addr:long)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-user-int-warn
|
Chapter 11. Image [config.openshift.io/v1]
|
Chapter 11. Image [config.openshift.io/v1] Description Image governs policies related to imagestream imports and runtime configuration for external registries. It allows cluster admins to configure which registries OpenShift is allowed to import images from, extra CA trust bundles for external registries, and policies to block or allow registry hostnames. When exposing OpenShift's image registry to the public, this also lets cluster admins specify the external hostname. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 11.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description additionalTrustedCA object additionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted during imagestream import, pod image pull, build image pull, and imageregistry pullthrough. The namespace for this config map is openshift-config. allowedRegistriesForImport array allowedRegistriesForImport limits the container image registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions. allowedRegistriesForImport[] object RegistryLocation contains a location of the registry specified by the registry domain name. The domain name might include wildcards, like '*' or '??'. externalRegistryHostnames array (string) externalRegistryHostnames provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in 'publicDockerImageRepository' field in ImageStreams. The value must be in "hostname[:port]" format. registrySources object registrySources contains configuration that determines how the container runtime should treat individual registries when accessing images for builds+pods. (e.g. whether or not to allow insecure access). It does not contain configuration for the internal cluster registry. 11.1.2. .spec.additionalTrustedCA Description additionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted during imagestream import, pod image pull, build image pull, and imageregistry pullthrough. 
The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 11.1.3. .spec.allowedRegistriesForImport Description allowedRegistriesForImport limits the container image registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions. Type array 11.1.4. .spec.allowedRegistriesForImport[] Description RegistryLocation contains a location of the registry specified by the registry domain name. The domain name might include wildcards, like '*' or '??'. Type object Property Type Description domainName string domainName specifies a domain name for the registry In case the registry use non-standard (80 or 443) port, the port should be included in the domain name as well. insecure boolean insecure indicates whether the registry is secure (https) or insecure (http) By default (if not specified) the registry is assumed as secure. 11.1.5. .spec.registrySources Description registrySources contains configuration that determines how the container runtime should treat individual registries when accessing images for builds+pods. (e.g. whether or not to allow insecure access). It does not contain configuration for the internal cluster registry. Type object Property Type Description allowedRegistries array (string) allowedRegistries are the only registries permitted for image pull and push actions. All other registries are denied. Only one of BlockedRegistries or AllowedRegistries may be set. blockedRegistries array (string) blockedRegistries cannot be used for image pull and push actions. All other registries are permitted. Only one of BlockedRegistries or AllowedRegistries may be set. containerRuntimeSearchRegistries array (string) containerRuntimeSearchRegistries are registries that will be searched when pulling images that do not have fully qualified domains in their pull specs. Registries will be searched in the order provided in the list. Note: this search list only works with the container runtime, i.e CRI-O. Will NOT work with builds or imagestream imports. insecureRegistries array (string) insecureRegistries are registries which do not have a valid TLS certificates or only support HTTP connections. 11.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description externalRegistryHostnames array (string) externalRegistryHostnames provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in 'publicDockerImageRepository' field in ImageStreams. The value must be in "hostname[:port]" format. internalRegistryHostname string internalRegistryHostname sets the hostname for the default internal image registry. The value must be in "hostname[:port]" format. This value is set by the image registry operator which controls the internal registry hostname. For backward compatibility, users can still use OPENSHIFT_DEFAULT_REGISTRY environment variable but this setting overrides the environment variable. 11.2. 
API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/images DELETE : delete collection of Image GET : list objects of kind Image POST : create an Image /apis/config.openshift.io/v1/images/{name} DELETE : delete an Image GET : read the specified Image PATCH : partially update the specified Image PUT : replace the specified Image /apis/config.openshift.io/v1/images/{name}/status GET : read status of the specified Image PATCH : partially update status of the specified Image PUT : replace status of the specified Image 11.2.1. /apis/config.openshift.io/v1/images Table 11.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Image Table 11.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Image Table 11.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. 
The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.5. HTTP responses HTTP code Reponse body 200 - OK ImageList schema 401 - Unauthorized Empty HTTP method POST Description create an Image Table 11.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.7. Body parameters Parameter Type Description body Image schema Table 11.8. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 202 - Accepted Image schema 401 - Unauthorized Empty 11.2.2. /apis/config.openshift.io/v1/images/{name} Table 11.9. Global path parameters Parameter Type Description name string name of the Image Table 11.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method DELETE Description delete an Image Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 11.12. Body parameters Parameter Type Description body DeleteOptions schema Table 11.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Image Table 11.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.15. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Image Table 11.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 11.17. Body parameters Parameter Type Description body Patch schema Table 11.18. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Image Table 11.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.20. Body parameters Parameter Type Description body Image schema Table 11.21. HTTP responses HTTP code Reponse body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty 11.2.3. /apis/config.openshift.io/v1/images/{name}/status Table 11.22. Global path parameters Parameter Type Description name string name of the Image Table 11.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Image Table 11.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.25. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Image Table 11.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 11.27. Body parameters Parameter Type Description body Patch schema Table 11.28. HTTP responses HTTP code Reponse body 200 - OK Image schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Image Table 11.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.30. 
Body parameters Parameter Type Description body Image schema Table 11.31. HTTP responses HTTP code Response body 200 - OK Image schema 201 - Created Image schema 401 - Unauthorized Empty
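For orientation, the following is a minimal sketch of how a cluster administrator might read and adjust the Image configuration described above with the oc CLI. It assumes the conventional cluster-scoped resource name cluster, and the registry hostname in the patch is only a placeholder.

# Read the cluster-wide Image configuration as YAML.
oc get image.config.openshift.io/cluster -o yaml

# The same object is served by the REST endpoint documented above.
oc get --raw /apis/config.openshift.io/v1/images/cluster

# Block a registry by merging a value into .spec.registrySources.
# The hostname is a placeholder; only one of allowedRegistries or
# blockedRegistries may be set.
oc patch image.config.openshift.io/cluster --type merge \
  -p '{"spec":{"registrySources":{"blockedRegistries":["untrusted-registry.example.com"]}}}'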
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/image-config-openshift-io-v1
5.3. GFS2 File System Hangs and Requires Reboot of All Nodes
5.3. GFS2 File System Hangs and Requires Reboot of All Nodes If your GFS2 file system hangs and does not return commands run against it, requiring that you reboot all nodes in the cluster before using it, check for the following issues. You may have had a failed fence. GFS2 file systems will freeze to ensure data integrity in the event of a failed fence. Check the messages logs to see if there are any failed fences at the time of the hang. Ensure that fencing is configured correctly. The GFS2 file system may have withdrawn. Check through the messages logs for the word withdraw and check for any messages and calltraces from GFS2 indicating that the file system has been withdrawn. A withdraw is indicative of file system corruption, a storage failure, or a bug. Unmount the file system, update the gfs2-utils package, and execute the fsck command on the file system to return it to service. Open a support ticket with Red Hat Support. Inform them you experienced a GFS2 withdraw and provide sosreports with logs. For information on the GFS2 withdraw function, see Section 4.14, "The GFS2 Withdraw Function" . This error may be indicative of a locking problem or bug. Gather data during one of these occurrences and open a support ticket with Red Hat Support, as described in Section 5.2, "GFS2 File System Hangs and Requires Reboot of One Node" .
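The checks above can be run from a shell. The sketch below assumes a hypothetical mount point /mnt/gfs2 and logical volume /dev/clustervg/gfs2lv; substitute your own paths, and only run the file system check after the file system has been unmounted on every node.

# Look for failed fence and GFS2 withdraw messages around the time of
# the hang.
grep -iE "fence|withdraw" /var/log/messages

# On each node, unmount the file system (the path is a placeholder).
umount /mnt/gfs2

# From a single node, update gfs2-utils and check the file system
# before returning it to service.
yum update gfs2-utils
fsck.gfs2 -y /dev/clustervg/gfs2lv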
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-gfs2hand-allnodes
Chapter 11. Preparing for users
Chapter 11. Preparing for users After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including taking steps to prepare for users. 11.1. Understanding identity provider configuration The OpenShift Container Platform control plane includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. 11.1.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 11.1.2. Supported identity providers You can configure the following types of identity providers: Identity provider Description htpasswd Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd . Keystone Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. LDAP Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Basic authentication Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism. Request header Configure a request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. GitHub or GitHub Enterprise Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a google identity provider using Google's OpenID Connect integration . OpenID Connect Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . After you define an identity provider, you can use RBAC to define and apply permissions . 11.1.3. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. add Provisions a user with the identity's preferred user name. 
If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 11.1.4. Sample identity provider CR The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider. Sample identity provider CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider 1
    mappingMethod: claim 2
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret 3

1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . 11.2. Using RBAC to define and apply permissions Understand and apply role-based access control. 11.2.1. RBAC overview Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. Cluster administrators can use the cluster roles and bindings to control who has various access levels to OpenShift Container Platform itself and all projects. Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action. Authorization is managed using: Authorization object Description Rules Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. Roles Collections of rules. You can associate, or bind, users and groups to multiple roles. Bindings Associations between users and/or groups with a role. There are two levels of RBAC roles and bindings that control authorization: RBAC level Description Cluster RBAC Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. Local RBAC Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation. This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles. During evaluation, both the cluster role bindings and the local role bindings are used. For example: Cluster-wide "allow" rules are checked. Locally-bound "allow" rules are checked. Deny by default. 11.2.1.1. Default cluster roles OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally. Important It is not recommended to manually modify the default cluster roles.
Modifications to these system roles can prevent a cluster from functioning properly. Default cluster role Description admin A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota. basic-user A user that can get basic information about projects and users. cluster-admin A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. cluster-status A user that can get basic cluster status information. cluster-reader A user that can get or view most of the objects but cannot modify them. edit A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. self-provisioner A user that can create their own projects. view A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin , plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin . The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below. Warning The get pods/exec , get pods/* , and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges . 11.2.1.2. Evaluating authorization OpenShift Container Platform evaluates authorization by using: Identity The user name and list of groups that the user belongs to. Action The action you perform. In most cases, this consists of: Project : The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities. Verb : The action itself: get , list , create , update , delete , deletecollection , or watch . Resource name : The API endpoint that you access. Bindings The full list of bindings, the associations between users or groups with a role. OpenShift Container Platform evaluates authorization by using the following steps: The identity and the project-scoped action is used to find all bindings that apply to the user or their groups. Bindings are used to locate all the roles that apply. Roles are used to find all the rules that apply. The action is checked against each rule to find a match. If no matching rule is found, the action is then denied by default. Tip Remember that users and groups can be associated with, or bound to, multiple roles at the same time. 
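Individual authorization decisions can also be checked from the CLI. The sketch below uses a hypothetical user alice and project joe to show how a single verb-and-resource combination is evaluated.

# List the users and groups that can perform a verb on a resource in a
# given project.
oc adm policy who-can create pods -n joe

# Check one decision on behalf of a specific user (requires permission
# to impersonate that user).
oc auth can-i create pods --as=alice -n joe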
Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each are associated with. Important The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin . Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level. 11.2.1.2.1. Cluster role aggregation The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation , where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources. 11.2.2. Projects and namespaces A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces. Namespaces provide a unique scope for: Named resources to avoid basic naming collisions. Delegated management authority to trusted users. The ability to limit community resource consumption. Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users. A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects. Projects can have a separate name , displayName , and description . The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters. The optional displayName is how the project is displayed in the web console (defaults to name ). The optional description can be a more detailed description of the project and is also visible in the web console. Each project scopes its own set of: Object Description Objects Pods, services, replication controllers, etc. Policies Rules for which users can or cannot perform actions on objects. Constraints Quotas for each kind of object that can be limited. Service accounts Service accounts act automatically with designated access to objects in the project. Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects. Developers and administrators can interact with projects by using the CLI or the web console. 11.2.3. Default projects OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by the kubelet. Pods created for master components in these namespaces are already marked as critical. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. 11.2.4.
Viewing cluster roles and bindings You can use the oc CLI to view cluster roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the cluster roles and bindings. Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings. Procedure To view the cluster roles and their associated rule sets: USD oc describe clusterrole.rbac Example output Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete 
deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] 
replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] 
appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*] ... 
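The full dump above is long; when you only need one role, you can narrow the query. A brief sketch using the default admin cluster role as the example:

# Describe a single cluster role instead of all of them.
oc describe clusterrole.rbac admin

# Or list the cluster roles by name only.
oc get clusterroles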
To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles: USD oc describe clusterrolebinding.rbac Example output Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api ... 11.2.5. Viewing local roles and bindings You can use the oc CLI to view local roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the local roles and bindings: Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings. Users with the admin default cluster role bound locally can view and manage roles and bindings in that project. Procedure To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project: USD oc describe rolebinding.rbac To view the local role bindings for a different project, add the -n flag to the command: USD oc describe rolebinding.rbac -n joe-project Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. 
Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project 11.2.6. Adding roles to users You can use the oc adm administrator CLI to manage the roles and bindings. Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands. You can bind any of the default cluster roles to local users or groups in your project. Procedure Add a role to a user in a specific project: USD oc adm policy add-role-to-user <role> <user> -n <project> For example, you can add the admin role to the alice user in joe project by running: USD oc adm policy add-role-to-user admin alice -n joe Tip You can alternatively apply the following YAML to add the role to the user: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice View the local role bindings and verify the addition in the output: USD oc describe rolebinding.rbac -n <project> For example, to view the local role bindings for the joe project: USD oc describe rolebinding.rbac -n joe Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe 1 The alice user has been added to the admins RoleBinding . 11.2.7. Creating a local role You can create a local role for a project and then bind it to a user. Procedure To create a local role for a project, run the following command: USD oc create role <name> --verb=<verb> --resource=<resource> -n <project> In this command, specify: <name> , the local role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to <project> , the project name For example, to create a local role that allows a user to view pods in the blue project, run the following command: USD oc create role podview --verb=get --resource=pod -n blue To bind the new role to a user, run the following command: USD oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue 11.2.8. 
Creating a cluster role You can create a cluster role. Procedure To create a cluster role, run the following command: USD oc create clusterrole <name> --verb=<verb> --resource=<resource> In this command, specify: <name> , the cluster role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to For example, to create a cluster role that allows a user to view pods, run the following command: USD oc create clusterrole podviewonly --verb=get --resource=pod 11.2.9. Local role binding commands When you manage a user or group's associated roles for local role bindings using the following operations, a project may be specified with the -n flag. If it is not specified, then the current project is used. You can use the following commands for local RBAC management. Table 11.1. Local role binding operations Command Description USD oc adm policy who-can <verb> <resource> Indicates which users can perform an action on a resource. USD oc adm policy add-role-to-user <role> <username> Binds a specified role to specified users in the current project. USD oc adm policy remove-role-from-user <role> <username> Removes a given role from specified users in the current project. USD oc adm policy remove-user <username> Removes specified users and all of their roles in the current project. USD oc adm policy add-role-to-group <role> <groupname> Binds a given role to specified groups in the current project. USD oc adm policy remove-role-from-group <role> <groupname> Removes a given role from specified groups in the current project. USD oc adm policy remove-group <groupname> Removes specified groups and all of their roles in the current project. 11.2.10. Cluster role binding commands You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources. Table 11.2. Cluster role binding operations Command Description USD oc adm policy add-cluster-role-to-user <role> <username> Binds a given role to specified users for all projects in the cluster. USD oc adm policy remove-cluster-role-from-user <role> <username> Removes a given role from specified users for all projects in the cluster. USD oc adm policy add-cluster-role-to-group <role> <groupname> Binds a given role to specified groups for all projects in the cluster. USD oc adm policy remove-cluster-role-from-group <role> <groupname> Removes a given role from specified groups for all projects in the cluster. 11.2.11. Creating a cluster admin The cluster-admin role is required to perform administrator-level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources. Prerequisites You must have created a user to define as the cluster admin. Procedure Define the user as a cluster admin: USD oc adm policy add-cluster-role-to-user cluster-admin <user> 11.3. The kubeadmin user OpenShift Container Platform creates a cluster administrator, kubeadmin , after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program's output. For example: INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> 11.3.1. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system 11.4. Populating OperatorHub from mirrored Operator catalogs If you mirrored Operator catalogs for use with disconnected clusters, you can populate OperatorHub with the Operators from your mirrored catalogs. You can use the generated manifests from the mirroring process to create the required ImageContentSourcePolicy and CatalogSource objects. 11.4.1. Prerequisites Mirroring Operator catalogs for use with disconnected clusters 11.4.1.1. Creating the ImageContentSourcePolicy object After mirroring Operator catalog content to your mirror registry, create the required ImageContentSourcePolicy (ICSP) object. The ICSP object configures nodes to translate between the image references stored in Operator manifests and the mirrored registry. Procedure On a host with access to the disconnected cluster, create the ICSP by running the following command to specify the imageContentSourcePolicy.yaml file in your manifests directory: USD oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml where <path/to/manifests/dir> is the path to the manifests directory for your mirrored content. You can now create a CatalogSource object to reference your mirrored index image and Operator content. 11.4.1.2. Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. Prerequisites An index image built and pushed to a registry. Procedure Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point. 
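Before moving on to the catalog source, you can also confirm that the ICSP object created in the previous step exists on the cluster. The following is a quick check; the object name depends on whatever your generated manifest used, so treat it as a placeholder:

oc get imagecontentsourcepolicy
oc describe imagecontentsourcepolicy <name>    # shows the source-to-mirror repository mappings that will be applied to nodes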
Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.12 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m 1 If you mirrored content to local files before uploading to a registry, remove any backslash (\) or slash (/) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. 2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 3 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . 4 Specify your index image. If you specify a tag after the image name, for example :v4.12 , the catalog source pod uses an image pull policy of Always , meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id> , the image pull policy is IfNotPresent , meaning the pod pulls the image only if it does not already exist on the node. 5 Specify your name or an organization name publishing the catalog. 6 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources Accessing images for Operators from private registries Image template for custom catalog sources Image pull policy 11.5. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. As a cluster administrator, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces...
to make the Operator available to all users and projects. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. 11.5.1. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type jaeger to find the Jaeger Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Select one of the following: All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Select an Update Channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... 
installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 11.5.2. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Install the oc command to your local system. Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, the openshift-operators namespace already has the appropriate global-operators Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. You can only have one Operator group per namespace. For more information, see "Operator groups". Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. 
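Because only one Operator group is allowed per namespace, it can also be worth checking whether the target namespace already has one before you create a new object. A quick check, where the namespace is a placeholder for your own:

oc get operatorgroups -n <namespace>

If an Operator group is already listed, inspect its spec.targetNamespaces to confirm that it matches the install mode you need before deciding whether to reuse it.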
Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Additional resources About OperatorGroups
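As a follow-up verification after subscribing, you can confirm that OLM generated the expected objects. The namespace below assumes the default AllNamespaces example used earlier; substitute your own namespace for SingleNamespace installs:

oc get subscriptions -n openshift-operators
oc get clusterserviceversions -n openshift-operators    # the CSV for the Operator should eventually report a Succeeded phase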
|
[
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3",
"oc describe clusterrole.rbac",
"Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch 
update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]",
"oc describe clusterrolebinding.rbac",
"Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api",
"oc describe rolebinding.rbac",
"oc describe rolebinding.rbac -n joe-project",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project",
"oc adm policy add-role-to-user <role> <user> -n <project>",
"oc adm policy add-role-to-user admin alice -n joe",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice",
"oc describe rolebinding.rbac -n <project>",
"oc describe rolebinding.rbac -n joe",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe",
"oc create role <name> --verb=<verb> --resource=<resource> -n <project>",
"oc create role podview --verb=get --resource=pod -n blue",
"oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue",
"oc create clusterrole <name> --verb=<verb> --resource=<resource>",
"oc create clusterrole podviewonly --verb=get --resource=pod",
"oc adm policy add-cluster-role-to-user cluster-admin <user>",
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>",
"oc delete secrets kubeadmin -n kube-system",
"oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.12 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/post-installation_configuration/post-install-preparing-for-users
|
8.62. freeradius
|
8.62. freeradius 8.62.1. RHEA-2014:1609 - freeradius enhancement update Updated freeradius packages that add one enhancement are now available for Red Hat Enterprise Linux 6. FreeRADIUS is a high-performance and highly configurable free Remote Authentication Dial In User Service (RADIUS) server, designed to allow centralized authentication and authorization for a network. Enhancement BZ# 1107843 Under certain conditions, the proxy server needs the ability to time out requests to the home server in less than a second. With this update, three new features addressing this requirement have been added: The home server's "response_window" configuration option now accepts fractional values with microsecond precision and a minimum value of one millisecond. The "response_window" configuration option with the same precision is now also supported in client sections to enable lowering of the home server's response window for specific clients. The "response_timeouts" configuration option is now supported in home server sections, allowing users to specify the number of times a request is permitted to miss the response window before the home server enters the defunct state. Users of freeradius are advised to upgrade to these updated packages, which add this enhancement.
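The following is a minimal sketch of how these options might look in a proxy configuration; the server name, addresses, secrets, and timer values are illustrative assumptions, so check the comments shipped in proxy.conf and clients.conf for the authoritative syntax:

home_server example-auth {
        type = auth
        ipaddr = 192.0.2.10
        port = 1812
        secret = examplesecret
        # Fractional seconds are accepted, down to the one-millisecond minimum
        response_window = 0.5
        # Number of missed response windows tolerated before the server is marked defunct
        response_timeouts = 3
}

client example-nas {
        ipaddr = 192.0.2.20
        secret = examplesecret
        # Optionally lower the home server's response window for requests proxied for this client
        response_window = 0.25
}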
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/freeradius
|
Using the Cryostat dashboard
|
Using the Cryostat dashboard Red Hat build of Cryostat 2 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_the_cryostat_dashboard/index
|
7.9 Release Notes
|
7.9 Release Notes Red Hat Enterprise Linux 7 Release Notes for Red Hat Enterprise Linux 7.9 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.9_release_notes/index
|
Chapter 2. Understanding OpenShift updates
|
Chapter 2. Understanding OpenShift updates 2.1. Introduction to OpenShift updates With OpenShift Container Platform 4, you can update an OpenShift Container Platform cluster with a single operation by using the web console or the OpenShift CLI ( oc ). Platform administrators can view new update options either by going to Administration Cluster Settings in the web console or by looking at the output of the oc adm upgrade command. Red Hat hosts a public OpenShift Update Service (OSUS), which serves a graph of update possibilities based on the OpenShift Container Platform release images in the official registry. The graph contains update information for any public OCP release. OpenShift Container Platform clusters are configured to connect to the OSUS by default, and the OSUS responds to clusters with information about known update targets. An update begins when either a cluster administrator or an automatic update controller edits the custom resource (CR) of the Cluster Version Operator (CVO) with a new version. To reconcile the cluster with the newly specified version, the CVO retrieves the target release image from an image registry and begins to apply changes to the cluster. Note Operators previously installed through Operator Lifecycle Manager (OLM) follow a different process for updates. See Updating installed Operators for more information. The target release image contains manifest files for all cluster components that form a specific OCP version. When updating the cluster to a new version, the CVO applies manifests in separate stages called Runlevels. Most, but not all, manifests support one of the cluster Operators. As the CVO applies a manifest to a cluster Operator, the Operator might perform update tasks to reconcile itself with its new specified version. The CVO monitors the state of each applied resource and the states reported by all cluster Operators. The CVO only proceeds with the update when all manifests and cluster Operators in the active Runlevel reach a stable condition. After the CVO updates the entire control plane through this process, the Machine Config Operator (MCO) updates the operating system and configuration of every node in the cluster. 2.1.1. Common questions about update availability There are several factors that affect if and when an update is made available to an OpenShift Container Platform cluster. The following list provides common questions regarding the availability of an update: What are the differences between each of the update channels? A new release is initially added to the candidate channel. After successful final testing, a release on the candidate channel is promoted to the fast channel, an errata is published, and the release is now fully supported. After a delay, a release on the fast channel is finally promoted to the stable channel. This delay represents the only difference between the fast and stable channels. Note For the latest z-stream releases, this delay may generally be a week or two. However, the delay for initial updates to the latest minor version may take much longer, generally 45-90 days. Releases promoted to the stable channel are simultaneously promoted to the eus channel. The primary purpose of the eus channel is to serve as a convenience for clusters performing an EUS-to-EUS update. Is a release on the stable channel safer or more supported than a release on the fast channel? 
If a regression is identified for a release on a fast channel, it will be resolved and managed to the same extent as if that regression was identified for a release on the stable channel. The only difference between releases on the fast and stable channels is that a release only appears on the stable channel after it has been on the fast channel for some time, which provides more time for new update risks to be discovered. A release that is available on the fast channel always becomes available on the stable channel after this delay. What does it mean if an update is supported but not recommended? Red Hat continuously evaluates data from multiple sources to determine whether updates from one version to another lead to issues. If an issue is identified, an update path may no longer be recommended to users. However, even if the update path is not recommended, customers are still supported if they perform the update. Red Hat does not block users from updating to a certain version. Red Hat may declare conditional update risks, which may or may not apply to a particular cluster. Declared risks provide cluster administrators more context about a supported update. Cluster administrators can still accept the risk and update to that particular target version. This update is always supported despite not being recommended in the context of the conditional risk. What if I see that an update to a particular release is no longer recommended? If Red Hat removes update recommendations from any supported release due to a regression, a superseding update recommendation will be provided to a future version that corrects the regression. There may be a delay while the defect is corrected, tested, and promoted to your selected channel. How long until the z-stream release is made available on the fast and stable channels? While the specific cadence can vary based on a number of factors, new z-stream releases for the latest minor version are typically made available about every week. Older minor versions, which have become more stable over time, may take much longer for new z-stream releases to be made available. Important These are only estimates based on past data about z-stream releases. Red Hat reserves the right to change the release frequency as needed. Any number of issues could cause irregularities and delays in this release cadence. Once a z-stream release is published, it also appears in the fast channel for that minor version. After a delay, the z-stream release may then appear in that minor version's stable channel. Additional resources Understanding update channels and releases 2.1.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images. 
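If you want to inspect the raw update graph yourself, the publicly documented OSUS endpoint can be queried directly. The URL and channel below follow common Red Hat documentation examples and should be treated as illustrative rather than as a stable programmatic interface:

curl -s -H 'Accept: application/json' 'https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.12' | jq '.nodes[0]'    # prints one vertex: a release version and its payload image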
To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue with the update or the target release. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. Reverting or rolling back your cluster to a version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates the number of nodes as specified by the maxUnavailable field on the machine configuration pool at a time. The MCO then applies the new configuration and reboots the machine. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 2.1.3. Common terms Control plane The control plane , which is composed of control plane machines, manages the OpenShift Container Platform cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. Cluster Version Operator The Cluster Version Operator (CVO) starts the update process for the cluster. It checks with OSUS based on the current cluster version and retrieves the graph which contains available or possible update paths. Machine Config Operator The Machine Config Operator (MCO) is a cluster-level Operator that manages the operating system and machine configurations. Through the MCO, platform administrators can configure and update systemd, CRI-O and Kubelet, the kernel, NetworkManager, and other system features on the worker nodes. 
OpenShift Update Service The OpenShift Update Service (OSUS) provides over-the-air updates to OpenShift Container Platform, including to Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. Channels Channels declare an update strategy tied to minor versions of OpenShift Container Platform. The OSUS uses this configured strategy to recommend update edges consistent with that strategy. Recommended update edge A recommended update edge is a recommended update between OpenShift Container Platform releases. Whether a given update is recommended can depend on the cluster's configured channel, current version, known bugs, and other information. OSUS communicates the recommended edges to the CVO, which runs in every cluster. Extended Update Support All post-4.7 even-numbered minor releases are labeled as Extended Update Support (EUS) releases. These releases introduce a verified update path between EUS releases, permitting customers to streamline updates of worker nodes and formulate update strategies of EUS-to-EUS OpenShift Container Platform releases that result in fewer reboots of worker nodes. For more information, see Red Hat OpenShift Extended Update Support (EUS) Overview . Additional resources Machine config overview Using the OpenShift Update Service in a disconnected environment Update channels 2.1.4. Additional resources For more detailed information about each major aspect of the update process, see How cluster updates work . 2.2. How cluster updates work The following sections describe each major aspect of the OpenShift Container Platform (OCP) update process in detail. For a general overview of how updates work, see the Introduction to OpenShift updates . 2.2.1. The Cluster Version Operator The Cluster Version Operator (CVO) is the primary component that orchestrates and facilitates the OpenShift Container Platform update process. During installation and standard cluster operation, the CVO is constantly comparing the manifests of managed cluster Operators to in-cluster resources, and reconciling discrepancies to ensure that the actual state of these resources match their desired state. 2.2.1.1. The ClusterVersion object One of the resources that the Cluster Version Operator (CVO) monitors is the ClusterVersion resource. Administrators and OpenShift components can communicate or interact with the CVO through the ClusterVersion object. The desired CVO state is declared through the ClusterVersion object and the current CVO state is reflected in the object's status. Note Do not directly modify the ClusterVersion object. Instead, use interfaces such as the oc CLI or the web console to declare your update target. The CVO continually reconciles the cluster with the target state declared in the spec property of the ClusterVersion resource. When the desired release differs from the actual release, that reconciliation updates the cluster. Update availability data The ClusterVersion resource also contains information about updates that are available to the cluster. This includes updates that are available, but not recommended due to a known risk that applies to the cluster. These updates are known as conditional updates. To learn how the CVO maintains this information about available updates in the ClusterVersion resource, see the "Evaluation of update availability" section. 
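To declare an update target without editing the ClusterVersion object directly, as the note above advises, you can use the oc CLI. For example, where the version number is only a placeholder taken from the sample output that follows:

oc adm upgrade --to=4.10.26        # update to a specific recommended version
oc adm upgrade --to-latest=true    # or move to the latest recommended version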
You can inspect all available updates with the following command: USD oc adm upgrade --include-not-recommended Note The additional --include-not-recommended parameter includes updates that are available but not recommended due to a known risk that applies to the cluster. Example output Cluster version is 4.10.22 Upstream is unset, so the cluster will use an appropriate default. Channel: fast-4.11 (available channels: candidate-4.10, candidate-4.11, eus-4.10, fast-4.10, fast-4.11, stable-4.10) Recommended updates: VERSION IMAGE 4.10.26 quay.io/openshift-release-dev/ocp-release@sha256:e1fa1f513068082d97d78be643c369398b0e6820afab708d26acda2262940954 4.10.25 quay.io/openshift-release-dev/ocp-release@sha256:ed84fb3fbe026b3bbb4a2637ddd874452ac49c6ead1e15675f257e28664879cc 4.10.24 quay.io/openshift-release-dev/ocp-release@sha256:aab51636460b5a9757b736a29bc92ada6e6e6282e46b06e6fd483063d590d62a 4.10.23 quay.io/openshift-release-dev/ocp-release@sha256:e40e49d722cb36a95fa1c03002942b967ccbd7d68de10e003f0baa69abad457b Supported but not recommended updates: Version: 4.11.0 Image: quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4 Recommended: False Reason: RPMOSTreeTimeout Message: Nodes with substantial numbers of containers and CPU contention may not reconcile machine configuration https://bugzilla.redhat.com/show_bug.cgi?id=2111817#c22 The oc adm upgrade command queries the ClusterVersion resource for information about available updates and presents it in a human-readable format. One way to directly inspect the underlying availability data created by the CVO is by querying the ClusterVersion resource with the following command: USD oc get clusterversion version -o json | jq '.status.availableUpdates' Example output [ { "channels": [ "candidate-4.11", "candidate-4.12", "fast-4.11", "fast-4.12" ], "image": "quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775", "url": "https://access.redhat.com/errata/RHBA-2023:3213", "version": "4.11.41" }, ... ] A similar command can be used to check conditional updates: USD oc get clusterversion version -o json | jq '.status.conditionalUpdates' Example output [ { "conditions": [ { "lastTransitionTime": "2023-05-30T16:28:59Z", "message": "The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136", "reason": "PatchesOlderRelease", "status": "False", "type": "Recommended" } ], "release": { "channels": [...], "image": "quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d", "url": "https://access.redhat.com/errata/RHBA-2023:1733", "version": "4.11.36" }, "risks": [...] }, ... ] 2.2.1.2. Evaluation of update availability The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update possibilities. This data is based on the cluster's subscribed channel. The CVO then saves information about update recommendations into either the availableUpdates or conditionalUpdates field of its ClusterVersion resource. 
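Because this recommendation data is tied to the cluster's subscribed channel, it can help to confirm which channel is in effect before interpreting the results. The channel name below is only an example, and the channel subcommand is available in recent oc releases:

oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'
oc adm upgrade channel stable-4.12    # switch the subscribed channel through a supported interface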
The CVO periodically checks the conditional updates for update risks. These risks are conveyed through the data served by the OSUS, which contains information for each version about known issues that might affect a cluster updated to that version. Most risks are limited to clusters with specific characteristics, such as clusters of a certain size or clusters that are deployed in a particular cloud platform.

The CVO continuously evaluates its cluster characteristics against the conditional risk information for each conditional update. If the CVO finds that the cluster matches the criteria, the CVO stores this information in the conditionalUpdates field of its ClusterVersion resource. If the CVO finds that the cluster does not match the risks of an update, or that there are no risks associated with the update, it stores the target version in the availableUpdates field of its ClusterVersion resource.

The user interface, either the web console or the OpenShift CLI (oc), presents this information in sectioned headings to the administrator. Each supported but not recommended update recommendation contains a link to further resources about the risk so that the administrator can make an informed decision about the update.

Additional resources
Update recommendation removals and Conditional Updates

2.2.2. Release images
A release image is the delivery mechanism for a specific OpenShift Container Platform (OCP) version. It contains the release metadata, a Cluster Version Operator (CVO) binary matching the release version, every manifest needed to deploy individual OpenShift cluster Operators, and a list of SHA digest-versioned references to all container images that make up this OpenShift version.

You can inspect the content of a specific release image by running the following command:

$ oc adm release extract <release image>

Example output

$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64
Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z

$ ls
0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml
0000_03_config-operator_01_proxy.crd.yaml
0000_03_marketplace-operator_01_operatorhub.crd.yaml
0000_03_marketplace-operator_02_operatorhub.cr.yaml
0000_03_quota-openshift_01_clusterresourcequota.crd.yaml  1
...
0000_90_service-ca-operator_02_prometheusrolebinding.yaml  2
0000_90_service-ca-operator_03_servicemonitor.yaml
0000_99_machine-api-operator_00_tombstones.yaml
image-references  3
release-metadata

1 Manifest for the ClusterResourceQuota CRD, to be applied on Runlevel 03
2 Manifest for the PrometheusRoleBinding resource for the service-ca-operator, to be applied on Runlevel 90
3 List of SHA digest-versioned references to all required images

2.2.3. Update process workflow
The following steps represent a detailed workflow of the OpenShift Container Platform (OCP) update process:

1. The target version is stored in the spec.desiredUpdate.version field of the ClusterVersion resource, which may be managed through the web console or the CLI.

2. The Cluster Version Operator (CVO) detects that the desiredUpdate in the ClusterVersion resource differs from the current cluster version. Using graph data from the OpenShift Update Service, the CVO resolves the desired cluster version to a pull spec for the release image.

3. The CVO validates the integrity and authenticity of the release image.
Red Hat publishes cryptographically-signed statements about published release images at predefined locations by using image SHA digests as unique and immutable release image identifiers. The CVO utilizes a list of built-in public keys to validate the presence and signatures of the statement matching the checked release image.

4. The CVO creates a job named version-$version-$hash in the openshift-cluster-version namespace. This job uses containers that are executing the release image, so the cluster downloads the image through the container runtime. The job then extracts the manifests and metadata from the release image to a shared volume that is accessible to the CVO.

5. The CVO validates the extracted manifests and metadata.

6. The CVO checks some preconditions to ensure that no problematic condition is detected in the cluster. Certain conditions can prevent updates from proceeding. These conditions are either determined by the CVO itself, or reported by individual cluster Operators that detect some details about the cluster that the Operator considers problematic for the update.

7. The CVO records the accepted release in status.desired and creates a status.history entry about the new update.

8. The CVO begins reconciling the manifests from the release image. Cluster Operators are updated in separate stages called Runlevels, and the CVO ensures that all Operators in a Runlevel finish updating before it proceeds to the next level.

9. Manifests for the CVO itself are applied early in the process. When the CVO deployment is applied, the current CVO pod stops, and a CVO pod that uses the new version starts. The new CVO proceeds to reconcile the remaining manifests.

10. The update proceeds until the entire control plane is updated to the new version. Individual cluster Operators might perform update tasks on their domain of the cluster, and while they do so, they report their state through the Progressing=True condition.

11. The Machine Config Operator (MCO) manifests are applied towards the end of the process. The updated MCO then begins updating the system configuration and operating system of every node. Each node might be drained, updated, and rebooted before it starts to accept workloads again.

The cluster reports as updated after the control plane update is finished, usually before all nodes are updated. After the update, the CVO maintains all cluster resources to match the state delivered in the release image.

2.2.4. Understanding how manifests are applied during an update
Some manifests supplied in a release image must be applied in a certain order because of the dependencies between them. For example, the CustomResourceDefinition resource must be created before the matching custom resources. Additionally, there is a logical order in which the individual cluster Operators must be updated to minimize disruption in the cluster. The Cluster Version Operator (CVO) implements this logical order through the concept of Runlevels.

These dependencies are encoded in the filenames of the manifests in the release image:

0000_<runlevel>_<component>_<manifest-name>.yaml

For example:

0000_03_config-operator_01_proxy.crd.yaml

The CVO internally builds a dependency graph for the manifests, where the CVO obeys the following rules (see the example command after this list):

During an update, manifests at a lower Runlevel are applied before those at a higher Runlevel.
Within one Runlevel, manifests for different components can be applied in parallel.
Within one Runlevel, manifests for a single component are applied in lexicographic order.
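The effect of these rules is visible in an extracted release payload. As a rough illustration only, and assuming the manifests were extracted to the current directory as in the earlier oc adm release extract example, the following shell pipeline counts the manifests in each Runlevel prefix:

$ ls 0000_*.yaml | cut -d_ -f2 | sort | uniq -c

Lower-numbered prefixes are reconciled earlier, and plain ls output already reflects the lexicographic order that applies within a single component.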
The CVO then applies manifests following the generated dependency graph.

Note
For some resource types, the CVO monitors the resource after its manifest is applied, and considers it to be successfully updated only after the resource reaches a stable state. Achieving this state can take some time. This is especially true for ClusterOperator resources, because the CVO waits for a cluster Operator to update itself and then to update its ClusterOperator status.

The CVO waits until all cluster Operators in the Runlevel meet the following conditions before it proceeds to the next Runlevel:

The cluster Operators have an Available=True condition.
The cluster Operators have a Degraded=False condition.
The cluster Operators declare they have achieved the desired version in their ClusterOperator resource.

Some actions can take significant time to finish. The CVO waits for the actions to complete in order to ensure that subsequent Runlevels can proceed safely. Initially reconciling the new release's manifests is expected to take 60 to 120 minutes in total; see Understanding OpenShift Container Platform update duration for more information about factors that influence update duration.

For example, consider a point in the update at which the CVO is waiting until all work is completed at Runlevel 20. The CVO has applied all manifests to the Operators in the Runlevel, but the kube-apiserver-operator ClusterOperator performs some actions after its new version is deployed. The kube-apiserver-operator ClusterOperator declares this progress through the Progressing=True condition and by not declaring the new version as reconciled in its status.versions. The CVO waits until the ClusterOperator reports an acceptable status, and then it starts reconciling manifests at Runlevel 25.

Additional resources
Understanding OpenShift Container Platform update duration

2.2.5. Understanding how the Machine Config Operator updates nodes
The Machine Config Operator (MCO) applies a new machine configuration to each control plane node and compute node. During the machine configuration update, control plane nodes and compute nodes are organized into their own machine config pools, where the pools of machines are updated in parallel. The .spec.maxUnavailable parameter, which has a default value of 1, determines how many nodes in a machine config pool can simultaneously undergo the update process.

When the machine configuration update process begins, the MCO checks the number of currently unavailable nodes in a pool. If there are fewer unavailable nodes than the value of .spec.maxUnavailable, the MCO initiates the following sequence of actions on available nodes in the pool:

1. Cordon and drain the node.
Note
When a node is cordoned, workloads cannot be scheduled to it.
2. Update the system configuration and operating system (OS) of the node.
3. Reboot the node.
4. Uncordon the node.

A node undergoing this process is unavailable until it is uncordoned and workloads can be scheduled to it again. The MCO begins updating nodes until the number of unavailable nodes is equal to the value of .spec.maxUnavailable.

As a node completes its update and becomes available, the number of unavailable nodes in the machine config pool is once again fewer than .spec.maxUnavailable. If there are remaining nodes that need to be updated, the MCO initiates the update process on a node until the .spec.maxUnavailable limit is once again reached. This process repeats until each control plane node and compute node has been updated.
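The current value of .spec.maxUnavailable for a pool can be checked, and optionally changed, before an update. The following is a hedged sketch only: the worker pool and the value 3, which matches the example workflow that follows, are illustrative, and raising the value allows more nodes to be disrupted at the same time:

$ oc get machineconfigpool worker -o jsonpath='{.spec.maxUnavailable}{"\n"}'
$ oc patch machineconfigpool worker --type merge -p '{"spec":{"maxUnavailable":3}}'

If the field has never been set, the first command prints an empty line and the default value of 1 applies.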
The following example workflow describes how this process might occur in a machine config pool with 5 nodes, where .spec.maxUnavailable is 3 and all nodes are initially available:

1. The MCO cordons nodes 1, 2, and 3, and begins to drain them.
2. Node 2 finishes draining, reboots, and becomes available again.
3. The MCO cordons node 4 and begins draining it.
4. Node 1 finishes draining, reboots, and becomes available again.
5. The MCO cordons node 5 and begins draining it.
6. Node 3 finishes draining, reboots, and becomes available again.
7. Node 5 finishes draining, reboots, and becomes available again.
8. Node 4 finishes draining, reboots, and becomes available again.

Because the update process for each node is independent of other nodes, some nodes in the example above finish their update out of the order in which they were cordoned by the MCO.

You can check the status of the machine configuration update by running the following command:

$ oc get mcp

Example output

NAME     CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-acd1358917e9f98cbdb599aea622d78b    True      False      False      3              3                   3                     0                      22h
worker   rendered-worker-1d871ac76e1951d32b2fe92369879826    False     True       False      2              1                   1                     0                      22h

Additional resources
Machine config overview
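As a supplementary illustration, and not part of the documented procedure, per-node progress during the MCO rollout can also be followed with standard node queries; the worker role label below is the default one and may differ in customized clusters:

$ oc get nodes -l node-role.kubernetes.io/worker

Nodes that the MCO has cordoned report a status of Ready,SchedulingDisabled until their update completes and they are uncordoned again.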
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/updating_clusters/understanding-openshift-updates-1