Chapter 6. Installing a private cluster on IBM Power Virtual Server
Chapter 6. Installing a private cluster on IBM Power Virtual Server In OpenShift Container Platform version 4.16, you can install a private cluster into an existing VPC and IBM Power(R) Virtual Server Workspace. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility. 6.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private; a minimal example manifest is sketched at the end of this introduction. To deploy a private cluster, you must: Use existing networking that meets your requirements. Create a DNS zone using IBM Cloud(R) DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud(R) DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 6.3. Private clusters in IBM Power Virtual Server To create a private cluster on IBM Power(R) Virtual Server, you must provide an existing private Virtual Private Cloud (VPC) and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to the internet to access the IBM Cloud(R) APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public Ingress A public DNS zone that matches the baseDomain for the cluster You will also need to create an IBM(R) DNS service containing a DNS zone that matches your baseDomain. Unlike standard deployments on Power VS, which use IBM(R) CIS for DNS, you must use IBM(R) DNS for your DNS service. 6.3.1. Limitations Private clusters on IBM Power(R) Virtual Server are subject only to the limitations associated with the existing VPC that was used for cluster deployment.
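The following manifest is a minimal sketch of the private load balancer annotation mentioned in "Private clusters" above. The annotation key shown is the one commonly documented for IBM Cloud(R) load balancers and is an assumption here, not part of this procedure; verify the exact key against the IBM Cloud(R) documentation for your cluster before relying on it.
apiVersion: v1
kind: Service
metadata:
  name: example-internal-service    # placeholder name
  annotations:
    # Assumed IBM Cloud(R) annotation that requests a private load balancer; confirm before use.
    service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 443
    targetPort: 8443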
6.4. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 6.4.1. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of the VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exist. Note Subnet IDs are not supported. 6.4.2. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 6.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required.
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent: $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519. Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer. Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
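Before you paste the pull secret into install-config.yaml, a quick sanity check confirms that the downloaded file is intact JSON. The file name pull-secret.txt and the use of jq are assumptions for illustration; any JSON-aware tool works:
$ jq . pull-secret.txt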
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.8. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 6.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 6.9.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.9.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcName: name-of-existing-vpc 11 vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceGUID: "powervs-region-service-instance-guid" publish: Internal 12 pullSecret: '{"auths": ...}' 13 sshKey: ssh-ed25519 AAAA... 14 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 
9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 Specify the name of an existing VPC. 12 Specify how to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. 13 Required. The installation program prompts you for this value. 14 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.10. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility (ccoctl) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual. Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual. To generate the manifests, run the following command from the directory that contains the installation program: $ ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: $ oc adm release extract \ --from=$RELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 6.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
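If you want to follow the installation from a second terminal, you can watch the same log file; this is an optional convenience and assumes the default log location shown above:
$ tail -f <installation_directory>/.openshift_install.log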
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.12. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command: $ echo $PATH Verification Verify your installation by using an oc command: $ oc <command> 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin A few additional post-login checks are sketched at the end of this chapter. Additional resources Accessing the web console 6.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager. After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 6.15. Next steps Customize your cluster Optional: Opt out of remote health reporting
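As noted in the login procedure above, a few additional post-login checks can confirm that the private cluster is healthy. These are standard oc commands rather than anything specific to IBM Power(R) Virtual Server:
$ oc get nodes
$ oc get clusteroperators
All nodes should report Ready, and every cluster Operator should eventually report Available as True without Degraded conditions.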
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcName: name-of-existing-vpc 11 vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceGUID: \"powervs-region-service-instance-guid\" publish: Internal 12 pullSecret: '{\"auths\": ...}' 13 sshKey: ssh-ed25519 AAAA... 14", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-private-cluster
Appendix A. General configuration options
Appendix A. General configuration options These are the general configuration options for Ceph. Note Typically, these will be set automatically by deployment tools, such as Ansible. fsid Description The file system ID. One per cluster. Type UUID Required No. Default N/A. Usually generated by deployment tools. admin_socket Description The socket for executing administrative commands on a daemon, irrespective of whether Ceph monitors have established a quorum. Type String Required No Default /var/run/ceph/$cluster-$name.asok pid_file Description The file in which the monitor or OSD will write its PID. For instance, /var/run/$cluster/$type.$id.pid will create /var/run/ceph/mon.a.pid for the mon with id a running in the ceph cluster. The pid file is removed when the daemon stops gracefully. If the process is not daemonized (meaning it runs with the -f or -d option), the pid file is not created. Type String Required No Default No chdir Description The directory Ceph daemons change to once they are up and running. Default / directory recommended. Type String Required No Default / max_open_files Description If set, when the Red Hat Ceph Storage cluster starts, Ceph sets the max_open_fds at the OS level (that is, the maximum number of file descriptors). This helps prevent Ceph OSDs from running out of file descriptors. Type 64-bit Integer Required No Default 0 fatal_signal_handlers Description If set, signal handlers are installed for the SEGV, ABRT, BUS, ILL, FPE, XCPU, XFSZ, and SYS signals to generate a useful log message. Type Boolean Default true
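The following snippet is a minimal sketch of how a few of these options might appear in the [global] section of a Ceph configuration file. The values are illustrative assumptions only; in most deployments, tools such as Ansible generate and manage these settings for you.
[global]
# Unique cluster identifier; normally generated by the deployment tool.
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
# Admin socket and PID file locations, using the Ceph metavariables described above.
admin_socket = /var/run/ceph/$cluster-$name.asok
pid_file = /var/run/$cluster/$type.$id.pid
# Raise the file descriptor limit at startup; 0 leaves the OS default in place.
max_open_files = 131072
# Install signal handlers so that fatal signals produce a useful log message.
fatal_signal_handlers = true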
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/configuration_guide/general-configuration-options_conf
Chapter 10. Atmos Component
Chapter 10. Atmos Component Available as of Camel version 2.15 Camel-Atmos is an Apache Camel component that allows you to work with ViPR object data services using the Atmos Client . from("atmos:foo/get?remotePath=/path").to("mock:test"); 10.1. Options The Atmos component supports 5 options, which are listed below. Name Description Default Type fullTokenId (security) The token id to pass to the Atmos client String secretKey (security) The secret key to pass to the Atmos client String uri (advanced) The URI of the server for the Atmos client to connect to String sslValidation (security) Whether the Atmos client should perform SSL validation false boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Atmos endpoint is configured using URI syntax: with the following path and query parameters: 10.1.1. Path Parameters (2 parameters): Name Description Default Type name Atmos name String operation Required Operation to perform AtmosOperation 10.1.2. Query Parameters (12 parameters): Name Description Default Type enableSslValidation (common) Atmos SSL validation false boolean fullTokenId (common) Atmos client fullTokenId String localPath (common) Local path to put files String newRemotePath (common) New path on Atmos when moving files String query (common) Search query on Atmos String remotePath (common) Where to put files on Atmos String secretKey (common) Atmos shared secret String uri (common) Atomos server uri String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 10.2. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.atmos.enabled Enable atmos component true Boolean camel.component.atmos.full-token-id The token id to pass to the Atmos client String camel.component.atmos.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.atmos.secret-key The secret key to pass to the Atmos client String camel.component.atmos.ssl-validation Whether the Atmos client should perform SSL validation false Boolean camel.component.atmos.uri The URI of the server for the Atmos client to connect to String 10.3. Dependencies To use Atmos in your camel routes you need to add the dependency on camel-atmos which implements this data format. 
If you use Maven, add the following to your pom.xml, substituting the version number for the latest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-atmos</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 10.4. Integrations When you look at Atmos integrations, there is one type of consumer, GetConsumer, which is a type of ScheduledPollConsumer and supports the Get operation, whereas there are four types of producers: Get, Del, Move, and Put. 10.5. Examples These examples are taken from tests: from("atmos:foo/get?remotePath=/path").to("mock:test"); Here, this is a consumer example. remotePath represents the path from which the data is read, and the resulting Camel exchange is passed on to the next endpoint in the route. Underneath, this component uses the Atmos client API for this and every other operation. from("direct:start") .to("atmos://get?remotePath=/dummy/dummy.txt") .to("mock:result"); Here, this is a producer sample. remotePath represents the path where the operations occur on the ViPR object data service. In producers, the operations ( Get , Del , Move , Put ) run on ViPR object data services and the results are set on headers of the Camel exchange. Depending on the operation, the following headers are set on the Camel exchange: DOWNLOADED_FILE, DOWNLOADED_FILES, UPLOADED_FILE, UPLOADED_FILES, FOUND_FILES, DELETED_PATH, MOVED_PATH; A minimal Put producer route is sketched after this section. 10.6. See Also Configuring Camel Component Endpoint Getting Started
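The following route is a minimal sketch of the Put producer mentioned above; it is not taken from the component tests, and the local directory, remote path, credentials, and endpoint URI are placeholders that you must replace with your own values.
// Read files from a local directory and upload each one to the Atmos store.
from("file:target/outbox")
    // Placeholders: supply your own token, secret, and Atmos endpoint.
    .to("atmos://put?remotePath=/uploads&fullTokenId=<full_token_id>&secretKey=<secret_key>&uri=https://<atmos_endpoint>")
    .to("mock:uploaded");
As with the Get producer, the result of the operation is reported back on the headers of the Camel exchange.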
[ "from(\"atmos:foo/get?remotePath=/path\").to(\"mock:test\");", "atmos:name/operation", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-atmos</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "from(\"atmos:foo/get?remotePath=/path\").to(\"mock:test\");", "from(\"direct:start\") .to(\"atmos://get?remotePath=/dummy/dummy.txt\") .to(\"mock:result\");", "DOWNLOADED_FILE, DOWNLOADED_FILES, UPLOADED_FILE, UPLOADED_FILES, FOUND_FILES, DELETED_PATH, MOVED_PATH;" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/atmos-component
Chapter 4. Configuring the instrumentation
Chapter 4. Configuring the instrumentation The Red Hat build of OpenTelemetry Operator uses an Instrumentation custom resource that defines the configuration of the instrumentation. 4.1. Auto-instrumentation in the Red Hat build of OpenTelemetry Operator Auto-instrumentation in the Red Hat build of OpenTelemetry Operator can automatically instrument an application without manual code changes. Developers and administrators can monitor applications with minimal effort and changes to the existing codebase. Auto-instrumentation runs as follows: The Red Hat build of OpenTelemetry Operator injects an init-container, or a sidecar container for Go, to add the instrumentation libraries for the programming language of the instrumented application. The Red Hat build of OpenTelemetry Operator sets the required environment variables in the application's runtime environment. These variables configure the auto-instrumentation libraries to collect traces, metrics, and logs and send them to the appropriate OpenTelemetry Collector or another telemetry backend. The injected libraries automatically instrument your application by connecting to known frameworks and libraries, such as web servers or database clients, to collect telemetry data. The source code of the instrumented application is not modified. Once the application is running with the injected instrumentation, the application automatically generates telemetry data, which is sent to a designated OpenTelemetry Collector or an external OTLP endpoint for further processing. Auto-instrumentation enables you to start collecting telemetry data quickly without having to manually integrate the OpenTelemetry SDK into your application code. However, some applications might require specific configurations or custom manual instrumentation. 4.2. OpenTelemetry instrumentation configuration options The Red Hat build of OpenTelemetry can inject and configure the OpenTelemetry auto-instrumentation libraries into your workloads. Currently, the project supports injection of the instrumentation libraries from Go, Java, Node.js, Python, .NET, and the Apache HTTP Server ( httpd ). Important The Red Hat build of OpenTelemetry Operator only supports the injection mechanism of the instrumentation libraries but does not support instrumentation libraries or upstream images. Customers can build their own instrumentation images or use community images. 4.2.1. Instrumentation options Instrumentation options are specified in an Instrumentation custom resource (CR). Sample Instrumentation CR apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: "20" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: "0.25" java: env: - name: OTEL_JAVAAGENT_DEBUG value: "true" Table 4.1. Parameters used by the Operator to define the Instrumentation Parameter Description Values env Common environment variables to define across all the instrumentations. exporter Exporter configuration. propagators Propagators defines inter-process context propagation configuration. tracecontext , baggage , b3 , b3multi , jaeger , ottrace , none resource Resource attributes configuration. sampler Sampling configuration. apacheHttpd Configuration for the Apache HTTP Server instrumentation. dotnet Configuration for the .NET instrumentation. go Configuration for the Go instrumentation. 
java Configuration for the Java instrumentation. nodejs Configuration for the Node.js instrumentation. python Configuration for the Python instrumentation. Table 4.2. Default protocol for auto-instrumentation Auto-instrumentation Default protocol Java 1.x otlp/grpc Java 2.x otlp/http Python otlp/http .NET otlp/http Go otlp/http Apache HTTP Server otlp/grpc 4.2.2. Configuration of the OpenTelemetry SDK variables You can use the instrumentation.opentelemetry.io/inject-sdk annotation in the OpenTelemetry Collector custom resource to instruct the Red Hat build of OpenTelemetry Operator to inject some of the following OpenTelemetry SDK environment variables, depending on the Instrumentation CR, into your pod: OTEL_SERVICE_NAME OTEL_TRACES_SAMPLER OTEL_TRACES_SAMPLER_ARG OTEL_PROPAGATORS OTEL_RESOURCE_ATTRIBUTES OTEL_EXPORTER_OTLP_ENDPOINT OTEL_EXPORTER_OTLP_CERTIFICATE OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE OTEL_EXPORTER_OTLP_CLIENT_KEY Table 4.3. Values for the instrumentation.opentelemetry.io/inject-sdk annotation Value Description "true" Injects the Instrumentation resource with the default name from the current namespace. "false" Injects no Instrumentation resource. "<instrumentation_name>" Specifies the name of the Instrumentation resource to inject from the current namespace. "<namespace>/<instrumentation_name>" Specifies the name of the Instrumentation resource to inject from another namespace. 4.2.3. Exporter configuration Although the Instrumentation custom resource supports setting up one or more exporters per signal, auto-instrumentation configures only the OTLP Exporter. So you must configure the endpoint to point to the OTLP Receiver on the Collector. Sample exporter TLS CA configuration using a config map apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation # ... spec # ... exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3 # ... 1 Specifies the OTLP endpoint using the HTTPS scheme and TLS. 2 Specifies the name of the config map. The config map must already exist in the namespace of the pod injecting the auto-instrumentation. 3 Points to the CA certificate in the config map or the absolute path to the certificate if the certificate is already present in the workload file system. Sample exporter mTLS configuration using a Secret apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation # ... spec # ... exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5 # ... 1 Specifies the OTLP endpoint using the HTTPS scheme and TLS. 2 Specifies the name of the Secret for the ca_file , cert_file , and key_file values. The Secret must already exist in the namespace of the pod injecting the auto-instrumentation. 3 Points to the CA certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system. 4 Points to the client certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system. 5 Points to the client key in the Secret or the absolute path to a key if the key is already present in the workload file system. Note You can provide the CA certificate in a config map or Secret. If you provide it in both, the config map takes higher precedence than the Secret. 
Example configuration for CA bundle injection by using a config map and Instrumentation CR apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: "true" # ... --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt # ... 4.2.4. Configuration of the Apache HTTP Server auto-instrumentation Important The Apache HTTP Server auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Table 4.4. Parameters for the .spec.apacheHttpd field Name Description Default attrs Attributes specific to the Apache HTTP Server. configPath Location of the Apache HTTP Server configuration. /usr/local/apache2/conf env Environment variables specific to the Apache HTTP Server. image Container image with the Apache SDK and auto-instrumentation. resourceRequirements The compute resource requirements. version Apache HTTP Server version. 2.4 The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-apache-httpd: "true" 4.2.5. Configuration of the .NET auto-instrumentation Important The .NET auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to .NET. image Container image with the .NET SDK and auto-instrumentation. resourceRequirements The compute resource requirements. For the .NET auto-instrumentation, the required OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . The .NET autoinstrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-dotnet: "true" 4.2.6. Configuration of the Go auto-instrumentation Important The Go auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Go. image Container image with the Go SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-go: "true" Additional permissions required for the Go auto-instrumentation in the OpenShift cluster apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - "SYS_PTRACE" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny Tip The CLI command for applying the permissions for the Go auto-instrumentation in the OpenShift cluster is as follows: USD oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account> 4.2.7. Configuration of the Java auto-instrumentation Important The Java auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Java. image Container image with the Java SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-java: "true" 4.2.8. Configuration of the Node.js auto-instrumentation Important The Node.js auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Node.js. image Container image with the Node.js SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotations to enable injection instrumentation.opentelemetry.io/inject-nodejs: "true" instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/executable" The instrumentation.opentelemetry.io/otel-go-auto-target-exe annotation sets the value for the required OTEL_GO_AUTO_TARGET_EXE environment variable. 4.2.9. Configuration of the Python auto-instrumentation Important The Python auto-instrumentation is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Python. image Container image with the Python SDK and auto-instrumentation. resourceRequirements The compute resource requirements. For Python auto-instrumentation, the OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the exporter endpoint uses port 4317. Python auto-instrumentation uses http/proto by default, so the telemetry data must be sent to port 4318. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-python: "true" 4.2.10. Multi-container pods By default, the instrumentation is injected into the first available container in the pod specification. In some cases, you can also specify target containers for injection. Pod annotation instrumentation.opentelemetry.io/container-names: "<container_1>,<container_2>" Note The Go auto-instrumentation does not support multi-container auto-instrumentation injection. 4.2.11. Multi-container pods with multiple instrumentations Injecting instrumentation for an application language to one or more containers in a multi-container pod requires the following annotation: instrumentation.opentelemetry.io/<application_language>-container-names: "<container_1>,<container_2>" 1 1 You can inject instrumentation for only one language per container. For the list of supported <application_language> values, see the following table. Table 4.5. Supported values for the <application_language> Language Value for <application_language> ApacheHTTPD apache DotNet dotnet Java java NGINX inject-nginx NodeJS nodejs Python python SDK sdk 4.2.12. Using the instrumentation CR with Service Mesh When using the instrumentation custom resource (CR) with Red Hat OpenShift Service Mesh, you must use the b3multi propagator.
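A minimal Instrumentation CR sketch for use with Red Hat OpenShift Service Mesh, following the pattern of the earlier examples in this section; the collector Service name basic-collector and the tracing-system namespace are assumptions for illustration, not values taken from this document:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: mesh-instrumentation
spec:
  # Point the auto-instrumentation at a collector reachable from the mesh (assumed name and namespace)
  exporter:
    endpoint: http://basic-collector.tracing-system.svc.cluster.local:4317
  # Service Mesh requires the b3multi propagator
  propagators:
    - b3multi
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"

The pod annotations described above, for example instrumentation.opentelemetry.io/inject-java: "true", are then applied to the workloads in the mesh as usual.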
[ "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: \"20\" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: \"0.25\" java: env: - name: OTEL_JAVAAGENT_DEBUG value: \"true\"", "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3", "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5", "apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: \"true\" --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt", "instrumentation.opentelemetry.io/inject-apache-httpd: \"true\"", "instrumentation.opentelemetry.io/inject-dotnet: \"true\"", "instrumentation.opentelemetry.io/inject-go: \"true\"", "apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - \"SYS_PTRACE\" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny", "oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>", "instrumentation.opentelemetry.io/inject-java: \"true\"", "instrumentation.opentelemetry.io/inject-nodejs: \"true\" instrumentation.opentelemetry.io/otel-go-auto-target-exe: \"/path/to/container/executable\"", "instrumentation.opentelemetry.io/inject-python: \"true\"", "instrumentation.opentelemetry.io/container-names: \"<container_1>,<container_2>\"", "instrumentation.opentelemetry.io/<application_language>-container-names: \"<container_1>,<container_2>\" 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/red_hat_build_of_opentelemetry/otel-configuration-of-instrumentation
Chapter 26. dns
Chapter 26. dns This chapter describes the commands under the dns command. 26.1. dns quota list List quotas Usage: Table 26.1. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id default: current project Table 26.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 26.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 26.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 26.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 26.2. dns quota reset Reset quotas Usage: Table 26.6. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id 26.3. dns quota set Set quotas Usage: Table 26.7. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id --api-export-size <api-export-size> New value for the api-export-size quota --recordset-records <recordset-records> New value for the recordset-records quota --zone-records <zone-records> New value for the zone-records quota --zone-recordsets <zone-recordsets> New value for the zone-recordsets quota --zones <zones> New value for the zones quota Table 26.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 26.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 26.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 26.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 26.4. dns service list List service statuses Usage: Table 26.12. 
Command arguments Value Summary -h, --help Show this help message and exit --hostname HOSTNAME Hostname --service_name SERVICE_NAME Service name --status STATUS Status --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 26.13. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 26.14. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 26.15. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 26.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 26.5. dns service show Show service status details Usage: Table 26.17. Positional arguments Value Summary id Service status id Table 26.18. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 26.19. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 26.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 26.21. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 26.22. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
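As a sketch of how the quota commands above fit together, the following sequence lists, raises, and then resets the DNS quotas for one project; the project ID is a hypothetical placeholder:

# Show the current quotas for the project
openstack dns quota list --project-id 5ab2cb3d6f3a4e2a9d9c1e0e8e9f0a12

# Raise the zones and zone-recordsets quotas for the project
openstack dns quota set --project-id 5ab2cb3d6f3a4e2a9d9c1e0e8e9f0a12 \
    --zones 20 --zone-recordsets 1000

# Return the project to the default quotas
openstack dns quota reset --project-id 5ab2cb3d6f3a4e2a9d9c1e0e8e9f0a12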
[ "openstack dns quota list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID]", "openstack dns quota reset [-h] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID]", "openstack dns quota set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID] [--api-export-size <api-export-size>] [--recordset-records <recordset-records>] [--zone-records <zone-records>] [--zone-recordsets <zone-recordsets>] [--zones <zones>]", "openstack dns service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--hostname HOSTNAME] [--service_name SERVICE_NAME] [--status STATUS] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]", "openstack dns service show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/dns
Chapter 14. Allowing JavaScript-based access to the API server from additional hosts
Chapter 14. Allowing JavaScript-based access to the API server from additional hosts 14.1. Allowing JavaScript-based access to the API server from additional hosts The default OpenShift Container Platform configuration only allows the web console to send requests to the API server. If you need to access the API server or OAuth server from a JavaScript application using a different hostname, you can configure additional hostnames to allow. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver.config.openshift.io cluster Add the additionalCORSAllowedOrigins field under the spec section and specify one or more additional hostnames: apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-07-11T17:35:37Z" generation: 1 name: cluster resourceVersion: "907" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\.subdomain\.domain\.com(:|\z) 1 1 The hostname is specified as a Golang regular expression that matches against CORS headers from HTTP requests against the API server and OAuth server. Note This example uses the following syntax: The (?i) makes it case-insensitive. The // pins to the beginning of the domain and matches the double slash following http: or https: . The \. escapes dots in the domain name. The (:|\z) matches the end of the domain name (\z) or a port separator (:) . Save the file to apply the changes.
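Because additionalCORSAllowedOrigins is a list, several origins can be allowed at once by using the same regular-expression syntax. The following sketch adds a hypothetical entry for a local JavaScript development server; the localhost origin is an illustration only, not part of the documented example:

spec:
  additionalCORSAllowedOrigins:
  - (?i)//my\.subdomain\.domain\.com(:|\z)
  # Hypothetical extra entry for a local development server on any port
  - (?i)//localhost(:|\z)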
[ "oc edit apiserver.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-07-11T17:35:37Z\" generation: 1 name: cluster resourceVersion: \"907\" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\\.subdomain\\.domain\\.com(:|\\z) 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/allowing-javascript-based-access-api-server
Chapter 17. Generating CRL on the IdM CA server
Chapter 17. Generating CRL on the IdM CA server If your IdM deployment uses an embedded certificate authority (CA), you might need to move generation of the Certificate Revocation List (CRL) from one Identity Management (IdM) server to another. This can be necessary, for example, when you want to migrate the server to another system. Only configure one server to generate the CRL. The IdM server that performs the CRL publisher role is usually the same server that performs the CA renewal server role, but this is not mandatory. Before you decommission the CRL publisher server, select and configure another server to perform the CRL publisher server role. 17.1. Stopping CRL generation on an IdM server To stop generating the Certificate Revocation List (CRL) on the IdM CRL publisher server, use the ipa-crlgen-manage command. Before you disable the generation, verify that the server is actually generating the CRL. You can then disable it. Prerequisites You must be logged in as root. Procedure Check if your server is generating the CRL: Stop generating the CRL on the server: Check if the server stopped generating the CRL: The server stopped generating the CRL. The next step is to enable CRL generation on the IdM replica. 17.2. Starting CRL generation on an IdM replica server You can start generating the Certificate Revocation List (CRL) on an IdM CA server with the ipa-crlgen-manage command. Prerequisites The RHEL system must be an IdM Certificate Authority server. You must be logged in as root. Procedure Start generating the CRL: Check if the CRL is generated: 17.3. Changing the CRL update interval The Certificate Revocation List (CRL) file is automatically generated by the Identity Management Certificate Authority (IdM CA) every four hours by default. You can change this interval with the following procedure. Procedure Stop the CRL generation server: Open the /var/lib/pki/pki-tomcat/conf/ca/CS.cfg file, and change the ca.crl.MasterCRL.autoUpdateInterval value to the new interval setting. For example, to generate the CRL every 60 minutes: Note If you update the ca.crl.MasterCRL.autoUpdateInterval parameter, the change will take effect after the already scheduled CRL update. Start the CRL generation server: Additional resources For more information about the CRL generation on an IdM replica server, see Starting CRL generation on an IdM replica server .
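Taken together, moving the CRL publisher role looks roughly like the following sketch, which only combines the commands shown in this chapter; run each command as root on the host named in the comment:

# On the current CRL publisher server
ipa-crlgen-manage status
ipa-crlgen-manage disable

# On the IdM replica that takes over CRL generation
ipa-crlgen-manage enable
ipa-crlgen-manage status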
[ "ipa-crlgen-manage status CRL generation: enabled Last CRL update: 2019-10-31 12:00:00 Last CRL Number: 6 The ipa-crlgen-manage command was successful", "ipa-crlgen-manage disable Stopping pki-tomcatd Editing /var/lib/pki/pki-tomcat/conf/ca/CS.cfg Starting pki-tomcatd Editing /etc/httpd/conf.d/ipa-pki-proxy.conf Restarting httpd CRL generation disabled on the local host. Please make sure to configure CRL generation on another master with ipa-crlgen-manage enable. The ipa-crlgen-manage command was successful", "ipa-crlgen-manage status", "ipa-crlgen-manage enable Stopping pki-tomcatd Editing /var/lib/pki/pki-tomcat/conf/ca/CS.cfg Starting pki-tomcatd Editing /etc/httpd/conf.d/ipa-pki-proxy.conf Restarting httpd Forcing CRL update CRL generation enabled on the local host. Please make sure to have only a single CRL generation master. The ipa-crlgen-manage command was successful", "ipa-crlgen-manage status CRL generation: enabled Last CRL update: 2019-10-31 12:10:00 Last CRL Number: 7 The ipa-crlgen-manage command was successful", "systemctl stop [email protected]", "ca.crl.MasterCRL.autoUpdateInterval= 60", "systemctl start [email protected]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_certificates_in_idm/generating-crl-on-the-idm-ca-server_managing-certificates-in-idm
Chapter 6. Verifying the integrity of back-end databases
Chapter 6. Verifying the integrity of back-end databases The Directory Server database integrity check can detect problems, such as corrupt metadata pages and the sorting of duplicate keys. If problems are found, you can, depending on the problems, re-index the database or restore a backup. 6.1. Performing a database integrity check The dsctl dbverify command enables administrators to verify the integrity of back-end databases. Procedure Optional: List the back-end databases of the instance: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend suffix list dc=example,dc=com (userRoot) Stop the instance: # dsctl instance_name stop Verify the database. For example, to verify the userRoot database, enter: # dsctl instance_name dbverify userRoot [04/Feb/2022:13:11:02.453624171 +0100] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 [04/Feb/2022:13:11:02.465339507 +0100] - WARN - ldbm_instance_add_instance_entry_callback - ldbm instance userroot already exists [04/Feb/2022:13:11:02.468060144 +0100] - ERR - ldbm_config_read_instance_entries - Failed to add instance entry cn= userroot ,cn=ldbm database,cn=plugins,cn=config [04/Feb/2022:13:11:02.471079045 +0100] - ERR - bdb_config_load_dse_info - failed to read instance entries [04/Feb/2022:13:11:02.476173304 +0100] - ERR - libdb - BDB0522 Page 0: metadata page corrupted [04/Feb/2022:13:11:02.481684604 +0100] - ERR - libdb - BDB0523 Page 0: could not check metadata page [04/Feb/2022:13:11:02.484113053 +0100] - ERR - libdb - /var/lib/dirsrv/slapd-instance_name/db/userroot/entryrdn.db: BDB0090 DB_VERIFY_BAD: Database verification failed [04/Feb/2022:13:11:02.486449603 +0100] - ERR - dbverify_ext - verify failed(-30970): /var/lib/dirsrv/slapd- instance_name /db/ userroot /entryrdn.db dbverify failed If the verification process reported any problems, fix them manually or restore a backup. Start the instance: # dsctl instance_name start Additional resources Restoring Directory Server
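The two commands above can be combined into a rough shell sketch that checks every backend of an instance; the instance name, bind DN, and the awk parsing of the suffix list output are assumptions based on the example output shown above:

# Collect the backend names while the instance is still running
backends=$(dsconf -D "cn=Directory Manager" ldap://server.example.com backend suffix list | awk -F'[()]' '{print $2}')

# Stop the instance, verify each backend, then start the instance again
dsctl instance_name stop
for be in $backends; do
    dsctl instance_name dbverify "$be"
done
dsctl instance_name start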
[ "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend suffix list dc=example,dc=com (userRoot)", "dsctl instance_name stop", "dsctl instance_name dbverify userRoot [04/Feb/2022:13:11:02.453624171 +0100] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 [04/Feb/2022:13:11:02.465339507 +0100] - WARN - ldbm_instance_add_instance_entry_callback - ldbm instance userroot already exists [04/Feb/2022:13:11:02.468060144 +0100] - ERR - ldbm_config_read_instance_entries - Failed to add instance entry cn= userroot ,cn=ldbm database,cn=plugins,cn=config [04/Feb/2022:13:11:02.471079045 +0100] - ERR - bdb_config_load_dse_info - failed to read instance entries [04/Feb/2022:13:11:02.476173304 +0100] - ERR - libdb - BDB0522 Page 0: metadata page corrupted [04/Feb/2022:13:11:02.481684604 +0100] - ERR - libdb - BDB0523 Page 0: could not check metadata page [04/Feb/2022:13:11:02.484113053 +0100] - ERR - libdb - /var/lib/dirsrv/slapd-instance_name/db/userroot/entryrdn.db: BDB0090 DB_VERIFY_BAD: Database verification failed [04/Feb/2022:13:11:02.486449603 +0100] - ERR - dbverify_ext - verify failed(-30970): /var/lib/dirsrv/slapd- instance_name /db/ userroot /entryrdn.db dbverify failed", "dsctl instance_name start" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_directory_databases/assembly_verifying-the-integrity-of-back-end-databases_configuring-directory-databases
Chapter 37. group
Chapter 37. group This chapter describes the commands under the group command. 37.1. group add user Add user to group Usage: Table 37.1. Positional Arguments Value Summary <group> Group to contain <user> (name or id) <user> User(s) to add to <group> (name or id) (repeat option to add multiple users) Table 37.2. Optional Arguments Value Summary -h, --help Show this help message and exit --group-domain <group-domain> Domain the group belongs to (name or id). this can be used in case collisions between group names exist. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. 37.2. group contains user Check user membership in group Usage: Table 37.3. Positional Arguments Value Summary <group> Group to check (name or id) <user> User to check (name or id) Table 37.4. Optional Arguments Value Summary -h, --help Show this help message and exit --group-domain <group-domain> Domain the group belongs to (name or id). this can be used in case collisions between group names exist. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. 37.3. group create Create new group Usage: Table 37.5. Positional Arguments Value Summary <group-name> New group name Table 37.6. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain to contain new group (name or id) --description <description> New group description --or-show Return existing group Table 37.7. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 37.8. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 37.9. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 37.10. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 37.4. group delete Delete group(s) Usage: Table 37.11. Positional Arguments Value Summary <group> Group(s) to delete (name or id) Table 37.12. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain containing group(s) (name or id) 37.5. group list List groups Usage: Table 37.13. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Filter group list by <domain> (name or id) --user <user> Filter group list by <user> (name or id) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --long List additional fields in output Table 37.14. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 37.15. 
CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 37.16. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 37.17. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 37.6. group remove user Remove user from group Usage: Table 37.18. Positional Arguments Value Summary <group> Group containing <user> (name or id) <user> User(s) to remove from <group> (name or id) (repeat option to remove multiple users) Table 37.19. Optional Arguments Value Summary -h, --help Show this help message and exit --group-domain <group-domain> Domain the group belongs to (name or id). this can be used in case collisions between group names exist. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. 37.7. group set Set group properties Usage: Table 37.20. Positional Arguments Value Summary <group> Group to modify (name or id) Table 37.21. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain containing <group> (name or id) --name <name> New group name --description <description> New group description 37.8. group show Display group details Usage: Table 37.22. Positional Arguments Value Summary <group> Group to display (name or id) Table 37.23. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain containing <group> (name or id) Table 37.24. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 37.25. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 37.26. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 37.27. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
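As a sketch of a typical sequence, the commands in this chapter can be combined as follows; the group name developers, the user alice, and the default domain are hypothetical values:

# Create a group and add a user to it
openstack group create --domain default --description "Application developers" developers
openstack group add user --group-domain default --user-domain default developers alice

# Confirm the membership and list groups for the user
openstack group contains user developers alice
openstack group list --user alice --long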
[ "openstack group add user [-h] [--group-domain <group-domain>] [--user-domain <user-domain>] <group> <user> [<user> ...]", "openstack group contains user [-h] [--group-domain <group-domain>] [--user-domain <user-domain>] <group> <user>", "openstack group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--description <description>] [--or-show] <group-name>", "openstack group delete [-h] [--domain <domain>] <group> [<group> ...]", "openstack group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--domain <domain>] [--user <user>] [--user-domain <user-domain>] [--long]", "openstack group remove user [-h] [--group-domain <group-domain>] [--user-domain <user-domain>] <group> <user> [<user> ...]", "openstack group set [-h] [--domain <domain>] [--name <name>] [--description <description>] <group>", "openstack group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] <group>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/group
5.170. lldpad
5.170. lldpad 5.170.1. RHBA-2012:1175 - lldpad bug fix update Updated lldpad packages that fix a bug are now available for Red Hat Enterprise Linux 6. The lldpad packages provide the Linux user space daemon and configuration tool for Intel's Link Layer Discovery Protocol (LLDP) agent with Enhanced Ethernet support. Bug Fix BZ# 844415 Previously, an error in the DCBX (Data Center Bridging Exchange) version selection logic could cause LLDPDUs (Link Layer Discovery Protocol Data Units) to be not encoded in the TLV (Type-Length Value) format during the transition from IEEE DCBX to the legacy DCBX mode. Consequently, link flaps, a delay, or a failure in synchronizing up DCBX between the host and a peer device could occur. In the case of booting from a remote FCoE (Fibre-Channel Over Ethernet) LUN (Logical Unit Number), this bug could result in a failure to boot. This update fixes the bug and TLV is now always used in the described scenario. All users of lldpad are advised to upgrade to these updated packages, which fix this bug. 5.170.2. RHBA-2012:1002 - lldpad bug fix update Updated lldpad packages that fix one bug are now available for Red Hat Enterprise Linux 6. The lldpad packages provide the Linux user space daemon and configuration tool for Intel's Link Layer Discovery Protocol (LLDP) agent with Enhanced Ethernet support. Bug Fix BZ# 828684 Previously, dcbtool commands could, under certain circumstances, fail to enable the Fibre Channel over Ethernet (FCoE) application type-length-values (TLV) for a selected interface during the installation process. Consequently, various important features might have not been enabled (for example priority flow control, or PFC) by the Data Center Bridging eXchange (DCBX) peer. To prevent such problems, application-specific parameters (such as the FCoE application TLV) in DCBX are now enabled by default. All users of lldpad are advised to upgrade to these updated packages, which fix this bug. 5.170.3. RHBA-2012:0901 - lldpad bug fix and enhancement update Updated lldpad packages that fix various bugs and provide an enhancement are now available for Red Hat Enterprise Linux 6. The lldpad package provides the Link Layer Discovery Protocol (LLDP) Linux user space daemon and associated configuration tools. It supports Intel's Link Layer Discovery Protocol (LLDP) and provides Enhanced Ethernet support. Bug Fixes BZ# 768555 The lldpad tool is initially invoked by initrd during the boot process to support Fibre Channel over Ethernet (FCoE) boot from a Storage Area Network (SAN). The runtime lldpad initscript did not kill lldpad before restarting it after system boot. Consequently, lldpad could not be started normally after system boot. In this update, lldpad init now contains the "-k" option to terminate the first instance of lldpad that was started during system boot. BZ# 803482 When the Data Center Bridging Exchange (DCBX) IEEE mode fails, it falls back to Converged Enhanced Ethernet (CEE) mode and Data Center Bridging (DCB) is enabled as part of the ifup routine. Normally, this does not occur unless either a CEE-DCBX Type-Length-Value (TLV) is received or the user explicitly enables this mode. However, in kernels released earlier than 2.6.38, DCBX IEEE mode is not supported and IEEE falls back to CEE mode immediately. Consequently, DCB was enabled in CEE mode on some kernels when IEEE mode failed, even though a peer TLV had not yet been received and the user did not manually enable it. 
This update fixes the logic by only enabling and advertising DCBX TLVs when a peer TLV is received. As a result, lldpad DCBX works as expected; IEEE mode is the default and CEE mode is used only if a peer CEE-DCBX TLV is received or the user enables it through the command line. BZ# 811422 A user may use dcbtool commands to clear the advertise bits on CEE-DCBX feature attributes (such as PFC, PG, APP). However, the user settings were lost during ifdown and ifup sequences and the default values were restored. This update fixes the problem so that the values are only set to defaults if the user has not explicitly enabled them. Enhancement BZ# 812202 When a switch disassociated a connection for a virtual machine (VM) running on a host and the VM was configured to use 802.1Qbg, then libvirt was not informed and the VM connectivity was lost. Libvirt has support for restarting a VM, but it relies on the LLDP Agent Daemon to forward the Virtual Switch Interface (VSI) information. This update enables forwarding of the switch-originated VSI message to libvirt. All users of lldpad are advised to upgrade to this updated package, which fixes these bugs and adds this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/lldpad
Appendix B. The cephadm commands
Appendix B. The cephadm commands The cephadm is a command line tool to manage the local host for the Cephadm Orchestrator. It provides commands to investigate and modify the state of the current host. Some of the commands are generally used for debugging. Note cephadm is not required on all hosts, however, it is useful when investigating a particular daemon. The cephadm-ansible-preflight playbook installs cephadm on all hosts and the cephadm-ansible purge playbook requires cephadm be installed on all hosts to work properly. adopt Description Convert an upgraded storage cluster daemon to run cephadm . Syntax Example ceph-volume Description This command is used to list all the devices on the particular host. Run the ceph-volume command inside a container Deploys OSDs with different device technologies like lvm or physical disks using pluggable tools and follows a predictable, and robust way of preparing, activating, and starting OSDs. Syntax Example check-host Description Check the host configuration that is suitable for a Ceph cluster. Syntax Example deploy Description Deploys a daemon on the local host. Syntax Example enter Description Run an interactive shell inside a running daemon container. Syntax Example help Description View all the commands supported by cephadm . Syntax Example install Description Install the packages. Syntax Example inspect-image Description Inspect the local Ceph container image. Syntax Example list-networks Description List the IP networks. Syntax Example ls Description List daemon instances known to cephadm on the hosts. You can use --no-detail for the command to run faster, which gives details of the daemon name, fsid, style, and systemd unit per daemon. You can use --legacy-dir option to specify a legacy base directory to search for daemons. Syntax Example logs Description Print journald logs for a daemon container. This is similar to the journalctl command. Syntax Example prepare-host Description Prepare a host for cephadm . Syntax Example pull Description Pull the Ceph image. Syntax Example registry-login Description Give cephadm login information for an authenticated registry. Cephadm attempts to log the calling host into that registry. Syntax Example You can also use a JSON registry file containing the login info formatted as: Syntax Example rm-daemon Description Remove a specific daemon instance. If you run the cephadm rm-daemon command on the host directly, although the command removes the daemon, the cephadm mgr module notices that the daemon is missing and redeploys it. This command is problematic and should be used only for experimental purposes and debugging. Syntax Example rm-cluster Description Remove all the daemons from a storage cluster on that specific host where it is run. Similar to rm-daemon , if you remove a few daemons this way and the Ceph Orchestrator is not paused and some of those daemons belong to services that are not unmanaged, the cephadm orchestrator just redeploys them there. Syntax Example rm-repo Description Remove a package repository configuration. This is mainly used for the disconnected installation of Red Hat Ceph Storage. Syntax Example run Description Run a Ceph daemon, in a container, in the foreground. Syntax Example shell Description Run an interactive shell with access to Ceph commands over the inferred or specified Ceph cluster. You can enter the shell using the cephadm shell command and run all the orchestrator commands within the shell. 
Syntax Example unit Description Start, stop, restart, enable, and disable the daemons with this operation. This operates on the daemon's systemd unit. Syntax Example version Description Provides the version of the storage cluster. Syntax Example
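As a sketch of how several of these commands are typically combined when investigating one host, reusing the FSID and daemon name from the examples in this appendix:

# Confirm the host is suitable and see which daemons cephadm knows about
cephadm check-host --expect-hostname host01
cephadm ls --no-detail

# Inspect the last journal entries for one daemon and restart it
cephadm logs --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8 -- -n 20
cephadm unit --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8 restart

# Check the cluster view from an interactive shell
cephadm shell -- ceph orch ls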
[ "cephadm adopt [-h] --name DAEMON_NAME --style STYLE [--cluster CLUSTER ] --legacy-dir [ LEGACY_DIR ] --config-json CONFIG_JSON ] [--skip-firewalld] [--skip-pull]", "cephadm adopt --style=legacy --name prometheus.host02", "cephadm ceph-volume inventory/simple/raw/lvm [-h] [--fsid FSID ] [--config-json CONFIG_JSON ] [--config CONFIG , -c CONFIG ] [--keyring KEYRING , -k KEYRING ]", "cephadm ceph-volume inventory --fsid f64f341c-655d-11eb-8778-fa163e914bcc", "cephadm check-host [--expect-hostname HOSTNAME ]", "cephadm check-host --expect-hostname host02", "cephadm shell deploy DAEMON_TYPE [-h] [--name DAEMON_NAME ] [--fsid FSID ] [--config CONFIG , -c CONFIG ] [--config-json CONFIG_JSON ] [--keyring KEYRING ] [--key KEY ] [--osd-fsid OSD_FSID ] [--skip-firewalld] [--tcp-ports TCP_PORTS ] [--reconfig] [--allow-ptrace] [--memory-request MEMORY_REQUEST ] [--memory-limit MEMORY_LIMIT ] [--meta-json META_JSON ]", "cephadm shell deploy mon --fsid f64f341c-655d-11eb-8778-fa163e914bcc", "cephadm enter [-h] [--fsid FSID ] --name NAME [command [command ...]]", "cephadm enter --name 52c611f2b1d9", "cephadm help", "cephadm help", "cephadm install PACKAGES", "cephadm install ceph-common ceph-osd", "cephadm --image IMAGE_ID inspect-image", "cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a inspect-image", "cephadm list-networks", "cephadm list-networks", "cephadm ls [--no-detail] [--legacy-dir LEGACY_DIR ]", "cephadm ls --no-detail", "cephadm logs [--fsid FSID ] --name DAEMON_NAME cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -n NUMBER # Last N lines cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -f # Follow the logs", "cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -n 20 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -f", "cephadm prepare-host [--expect-hostname HOSTNAME ]", "cephadm prepare-host cephadm prepare-host --expect-hostname host01", "cephadm [-h] [--image IMAGE_ID ] pull", "cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a pull", "cephadm registry-login --registry-url [ REGISTRY_URL ] --registry-username [ USERNAME ] --registry-password [ PASSWORD ] [--fsid FSID ] [--registry-json JSON_FILE ]", "cephadm registry-login --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "cat REGISTRY_FILE { \"url\":\" REGISTRY_URL \", \"username\":\" REGISTRY_USERNAME \", \"password\":\" REGISTRY_PASSWORD \" }", "cat registry_file { \"url\":\"registry.redhat.io\", \"username\":\"myuser\", \"password\":\"mypass\" } cephadm registry-login -i registry_file", "cephadm rm-daemon [--fsid FSID ] [--name DAEMON_NAME ] [--force ] [--force-delete-data]", "cephadm rm-daemon --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8", "cephadm rm-cluster [--fsid FSID ] [--force]", "cephadm rm-cluster --fsid f64f341c-655d-11eb-8778-fa163e914bcc", "cephadm rm-repo [-h]", "cephadm rm-repo", "cephadm run [--fsid FSID ] --name DAEMON_NAME", "cephadm run --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8", "cephadm shell [--fsid FSID ] [--name DAEMON_NAME , -n DAEMON_NAME ] [--config CONFIG , -c CONFIG ] [--mount MOUNT , -m MOUNT ] [--keyring KEYRING , -k KEYRING ] [--env ENV , -e ENV ]", "cephadm shell -- ceph orch ls cephadm shell", "cephadm unit [--fsid FSID ] --name DAEMON_NAME start/stop/restart/enable/disable", "cephadm unit --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8 
start", "cephadm version", "cephadm version" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/installation_guide/the-cephadm-commands_install
9.9. Active Directory Authentication Using Kerberos (GSSAPI)
9.9. Active Directory Authentication Using Kerberos (GSSAPI) When using Red Hat JBoss Data Grid with Microsoft Active Directory, data security can be enabled by using Kerberos authentication. To configure Kerberos authentication for Microsoft Active Directory, use the following procedure. Procedure 9.6. Configure Kerberos Authentication for Active Directory (Library Mode) Configure the JBoss EAP server to authenticate itself to Kerberos. This can be done by configuring a dedicated security domain, for example: The security domain for authentication must be configured correctly. For JBoss EAP, an application must have a valid Kerberos ticket. To initiate the Kerberos ticket, you must reference another security domain by using the usernamePasswordDomain module option. This points to the standard Kerberos login module described in Step 3. The security domain authentication configuration described in the previous step points to the following standard Kerberos login module:
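The security domains above assume that the JBoss EAP host can resolve and reach a KDC for the INFINISPAN.ORG realm, which for Active Directory is a domain controller. A minimal /etc/krb5.conf sketch is shown below; the domain controller host name ad-dc.infinispan.org is an assumption for illustration:

[libdefaults]
    default_realm = INFINISPAN.ORG

[realms]
    INFINISPAN.ORG = {
        # Assumed Active Directory domain controller acting as the KDC
        kdc = ad-dc.infinispan.org
        admin_server = ad-dc.infinispan.org
    }

[domain_realm]
    .infinispan.org = INFINISPAN.ORG
    infinispan.org = INFINISPAN.ORG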
[ "<security-domain name=\"ldap-service\" cache-type=\"default\"> <authentication> <login-module code=\"Kerberos\" flag=\"required\"> <module-option name=\"storeKey\" value=\"true\"/> <module-option name=\"useKeyTab\" value=\"true\"/> <module-option name=\"refreshKrb5Config\" value=\"true\"/> <module-option name=\"principal\" value=\"ldap/[email protected]\"/> <module-option name=\"keyTab\" value=\"USD{basedir}/keytab/ldap.keytab\"/> <module-option name=\"doNotPrompt\" value=\"true\"/> </login-module> </authentication> </security-domain>", "<module-option name=\"usernamePasswordDomain\" value=\"krb-admin\"/>", "<security-domain name=\"ispn-admin\" cache-type=\"default\"> <authentication> <login-module code=\"SPNEGO\" flag=\"requisite\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> <module-option name=\"serverSecurityDomain\" value=\"ldap-service\"/> <module-option name=\"usernamePasswordDomain\" value=\"krb-admin\"/> </login-module> <login-module code=\"AdvancedAdLdap\" flag=\"required\"> <module-option name=\"password-stacking\" value=\"useFirstPass\"/> <module-option name=\"bindAuthentication\" value=\"GSSAPI\"/> <module-option name=\"jaasSecurityDomain\" value=\"ldap-service\"/> <module-option name=\"java.naming.provider.url\" value=\"ldap://localhost:389\"/> <module-option name=\"baseCtxDN\" value=\"ou=People,dc=infinispan,dc=org\"/> <module-option name=\"baseFilter\" value=\"(krb5PrincipalName={0})\"/> <module-option name=\"rolesCtxDN\" value=\"ou=Roles,dc=infinispan,dc=org\"/> <module-option name=\"roleFilter\" value=\"(member={1})\"/> <module-option name=\"roleAttributeID\" value=\"cn\"/> </login-module> </authentication> </security-domain>", "<security-domain name=\"krb-admin\" cache-type=\"default\"> <authentication> <login-module code=\"Kerberos\" flag=\"required\"> <module-option name=\"useKeyTab\" value=\"true\"/> <module-option name=\"principal\" value=\"[email protected]\"/> <module-option name=\"keyTab\" value=\"USD{basedir}/keytab/admin.keytab\"/> </login-module> </authentication> </security-domain>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/active_directory_authentication_using_kerberos_gssapi
Chapter 2. Installing OpenShift on a single node
Chapter 2. Installing OpenShift on a single node You can install single-node OpenShift by using either the web-based Assisted Installer or the coreos-installer tool to generate a discovery ISO image. The discovery ISO image writes the Red Hat Enterprise Linux CoreOS (RHCOS) system configuration to the target installation disk, so that you can run a single-cluster node to meet your needs. Consider using single-node OpenShift when you want to run a cluster in a low-resource or an isolated environment for testing, troubleshooting, training, or small-scale project purposes. 2.1. Installing single-node OpenShift using the Assisted Installer To install OpenShift Container Platform on a single node, use the web-based Assisted Installer wizard to guide you through the process and manage the installation. See the Assisted Installer for OpenShift Container Platform documentation for details and configuration options. 2.1.1. Generating the discovery ISO with the Assisted Installer Installing OpenShift Container Platform on a single node requires a discovery ISO, which the Assisted Installer can generate. Procedure On the administration host, open a browser and navigate to Red Hat OpenShift Cluster Manager . Click Create New Cluster to create a new cluster. In the Cluster name field, enter a name for the cluster. In the Base domain field, enter a base domain. For example: All DNS records must be subdomains of this base domain and include the cluster name, for example: Note You cannot change the base domain or cluster name after cluster installation. Select Install single node OpenShift (SNO) and complete the rest of the wizard steps. Download the discovery ISO. Complete the remaining Assisted Installer wizard steps. Important Ensure that you take note of the discovery ISO URL for installing with virtual media. If you enable OpenShift Virtualization during this process, you must have a second local storage device of at least 50GiB for your virtual machines. Additional resources Persistent storage using logical volume manager storage What you can do with OpenShift Virtualization 2.1.2. Installing single-node OpenShift with the Assisted Installer Use the Assisted Installer to install the single-node cluster. Prerequisites Ensure that the boot drive order in the server BIOS settings defaults to booting the server from the target installation disk. Procedure Attach the discovery ISO image to the target host. Boot the server from the discovery ISO image. The discovery ISO image writes the system configuration to the target installation disk and automatically triggers a server restart. On the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts. If necessary, reload the Assisted Clusters page and select the cluster name. Complete the install wizard steps. Add networking details, including a subnet from the available subnets. Add the SSH public key if necessary. Monitor the installation's progress. Watch the cluster events. After the installation process finishes writing the operating system image to the server's hard disk, the server restarts. Optional: Remove the discovery ISO image. The server restarts several times automatically, deploying the control plane. Additional resources Creating a bootable ISO image on a USB drive Booting from an HTTP-hosted ISO image using the Redfish API Adding worker nodes to single-node OpenShift clusters 2.2. 
Installing single-node OpenShift manually To install OpenShift Container Platform on a single node, first generate the installation ISO, and then boot the server from the ISO. You can monitor the installation using the openshift-install installation program. Additional resources Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Configuring DHCP or static IP addresses 2.2.1. Generating the installation ISO with coreos-installer Installing OpenShift Container Platform on a single node requires an installation ISO, which you can generate with the following procedure. Prerequisites Install podman . Note See "Requirements for installing OpenShift on a single node" for networking requirements, including DNS records. Procedure Set the OpenShift Container Platform version: USD export OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version, for example, latest-4.18 Set the host architecture: USD export ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture, for example, aarch64 or x86_64 . Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Retrieve the RHCOS ISO URL by running the following command: USD export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\" -f4) Download the RHCOS ISO: USD curl -L USDISO_URL -o rhcos-live.iso Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. 
Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Embed the ignition data into the RHCOS ISO by running the following commands: USD alias coreos-installer='podman run --privileged --pull always --rm \ -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data \ -w /data quay.io/coreos/coreos-installer:release' USD coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso Additional resources See Requirements for installing OpenShift on a single node for more information about installing OpenShift Container Platform on a single node. See Cluster capabilities for more information about enabling cluster capabilities that were disabled before installation. See Optional cluster capabilities in OpenShift Container Platform 4.18 for more information about the features provided by each capability. 2.2.2. Monitoring the cluster installation using openshift-install Use openshift-install to monitor the progress of the single-node cluster installation. Prerequisites Ensure that the boot drive order in the server BIOS settings defaults to booting the server from the target installation disk. Procedure Attach the discovery ISO image to the target host. Boot the server from the discovery ISO image. The discovery ISO image writes the system configuration to the target installation disk and automatically triggers a server restart. On the administration host, monitor the installation by running the following command: USD ./openshift-install --dir=ocp wait-for install-complete Optional: Remove the discovery ISO image. The server restarts several times while deploying the control plane. Verification After the installation is complete, check the environment by running the following command: USD export KUBECONFIG=ocp/auth/kubeconfig USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.31.3 Additional resources Creating a bootable ISO image on a USB drive Booting from an HTTP-hosted ISO image using the Redfish API Adding worker nodes to single-node OpenShift clusters 2.3. Installing single-node OpenShift on cloud providers 2.3.1. Additional requirements for installing single-node OpenShift on a cloud provider The documentation for installer-provisioned installation on cloud providers is based on a high availability cluster consisting of three control plane nodes. When referring to the documentation, consider the differences between the requirements for a single-node OpenShift cluster and a high availability cluster. A high availability cluster requires a temporary bootstrap machine, three control plane machines, and at least two compute machines. For a single-node OpenShift cluster, you need only a temporary bootstrap machine and one cloud instance for the control plane node and no compute nodes. The minimum resource requirements for high availability cluster installation include a control plane node with 4 vCPUs and 100GB of storage. For a single-node OpenShift cluster, you must have a minimum of 8 vCPUs and 120GB of storage. The controlPlane.replicas setting in the install-config.yaml file should be set to 1 . The compute.replicas setting in the install-config.yaml file should be set to 0 . This makes the control plane node schedulable. 2.3.2. 
Supported cloud providers for single-node OpenShift The following table contains a list of supported cloud providers and CPU architectures. Table 2.1. Supported cloud providers Cloud provider CPU architecture Amazon Web Service (AWS) x86_64 and AArch64 Microsoft Azure x86_64 Google Cloud Platform (GCP) x86_64 and AArch64 2.3.3. Installing single-node OpenShift on AWS Installing a single-node cluster on AWS requires installer-provisioned installation using the "Installing a cluster on AWS with customizations" procedure. Additional resources Installing a cluster on AWS with customizations 2.3.4. Installing single-node OpenShift on Azure Installing a single node cluster on Azure requires installer-provisioned installation using the "Installing a cluster on Azure with customizations" procedure. Additional resources Installing a cluster on Azure with customizations 2.3.5. Installing single-node OpenShift on GCP Installing a single node cluster on GCP requires installer-provisioned installation using the "Installing a cluster on GCP with customizations" procedure. Additional resources Installing a cluster on GCP with customizations 2.4. Creating a bootable ISO image on a USB drive You can install software using a bootable USB drive that contains an ISO image. Booting the server with the USB drive prepares the server for the software installation. Procedure On the administration host, insert a USB drive into a USB port. Create a bootable USB drive, for example: # dd if=<path_to_iso> of=<path_to_usb> status=progress where: <path_to_iso> is the relative path to the downloaded ISO file, for example, rhcos-live.iso . <path_to_usb> is the location of the connected USB drive, for example, /dev/sdb . After the ISO is copied to the USB drive, you can use the USB drive to install software on the server. 2.5. Booting from an HTTP-hosted ISO image using the Redfish API You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API. Note This example procedure demonstrates the steps on a Dell server. Important Ensure that you have the latest firmware version of iDRAC that is compatible with your hardware. If you have any issues with the hardware or firmware, you must contact the provider. Prerequisites Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO. Use a Dell PowerEdge server that is compatible with iDRAC9. Procedure Copy the ISO file to an HTTP server accessible in your network. Boot the host from the hosted ISO file, for example: Call the Redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command: USD curl -k -u <bmc_username>:<bmc_password> -d '{"Image":"<hosted_iso_file>", "Inserted": true}' -H "Content-Type: application/json" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia Where: <bmc_username>:<bmc_password> Is the username and password for the target host BMC. <hosted_iso_file> Is the URL for the hosted installation ISO, for example: http://webserver.example.com/rhcos-live-minimal.iso . The ISO must be accessible from the target host machine. <host_bmc_address> Is the BMC IP address of the target host machine. 
Set the host to boot from the VirtualMedia device by running the following command: USD curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1 Reboot the host: USD curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "ForceRestart"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command: USD curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "On"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset 2.6. Creating a custom live RHCOS ISO for remote server access In some cases, you cannot attach an external disk drive to a server, however, you need to access the server remotely to provision a node. It is recommended to enable SSH access to the server. You can create a live RHCOS ISO with SSHd enabled and with predefined credentials so that you can access the server after it boots. Prerequisites You installed the butane utility. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Download the latest live RHCOS ISO from mirror.openshift.com . Create the embedded.yaml file that the butane utility uses to create the Ignition file: variant: openshift version: 4.18.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>' 1 The core user has sudo privileges. Run the butane utility to create the Ignition file using the following command: USD butane -pr embedded.yaml -o embedded.ign After the Ignition file is created, you can include the configuration in a new live RHCOS ISO, which is named rhcos-sshd-4.18.0-x86_64-live.x86_64.iso , with the coreos-installer utility: USD coreos-installer iso ignition embed -i embedded.ign rhcos-4.18.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.18.0-x86_64-live.x86_64.iso Verification Check that the custom live ISO can be used to boot the server by running the following command: # coreos-installer iso ignition show rhcos-sshd-4.18.0-x86_64-live.x86_64.iso Example output { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]" ] } ] } } 2.7. Installing single-node OpenShift with IBM Z and IBM LinuxONE Installing a single-node cluster on IBM Z(R) and IBM(R) LinuxONE requires user-provisioned installation using one of the following procedures: Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE Installing a cluster in an LPAR on IBM Z(R) and IBM(R) LinuxONE Note Installing a single-node cluster on IBM Z(R) simplifies installation for development and test environments and requires less resource requirements at entry level. 
Hardware requirements The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. 2.7.1. Installing single-node OpenShift with z/VM on IBM Z and IBM LinuxONE Prerequisites You have installed podman . Procedure Set the OpenShift Container Platform version by running the following command: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version. For example, latest-4.18 . Set the host architecture by running the following command: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture s390x . Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Obtain the RHEL kernel , initramfs , and rootfs artifacts from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. 
You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel rhcos-<version>-live-kernel-<architecture> initramfs rhcos-<version>-live-initramfs.<architecture>.img rootfs rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Move the following artifacts and files to an HTTP or HTTPS server: Downloaded RHEL live kernel , initramfs , and rootfs artifacts Ignition files Create parameter files for a particular virtual machine: Example parameter file cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ ignition.config.url=http://<http_server>:8080/ignition/bootstrap-in-place-for-live-iso.ign \ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 2 ip=<ip>::<gateway>:<mask>:<hostname>::none nameserver=<dns> \ 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.dasd=0.0.4411 \ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ 5 zfcp.allow_lun_scan=0 1 For the ignition.config.url= parameter, specify the Ignition file for the machine role. Only HTTP and HTTPS protocols are supported. 2 For the coreos.live.rootfs_url= artifact, specify the matching rootfs artifact for the kernel`and `initramfs you are booting. Only HTTP and HTTPS protocols are supported. 3 For the ip= parameter, assign the IP address automatically using DHCP or manually as described in "Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE". 4 For installations on DASD-type disks, use rd.dasd= to specify the DASD where RHCOS is to be installed. Omit this entry for FCP-type disks. 5 For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks. Leave all other parameters unchanged. Transfer the following artifacts, files, and images to z/VM. For example by using FTP: kernel and initramfs artifacts Parameter files RHCOS images For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader by running the following command: After the first reboot of the virtual machine, run the following commands directly after one another: To boot a DASD device after first reboot, run the following commands: USD cp i <devno> clear loadparm prompt where: <devno> Specifies the device number of the boot device as seen by the guest. USD cp vi vmsg 0 <kernel_parameters> where: <kernel_parameters> Specifies a set of kernel parameters to be stored as system control program data (SCPDATA). When booting Linux, these kernel parameters are concatenated to the end of the existing kernel parameters that are used by your boot configuration. The combined parameter string must not exceed 896 characters. To boot an FCP device after first reboot, run the following commands: USD cp set loaddev portname <wwpn> lun <lun> where: <wwpn> Specifies the target port and <lun> the logical unit in hexadecimal format. 
USD cp set loaddev bootprog <n> where: <n> Specifies the kernel to be booted. USD cp set loaddev scpdata {APPEND|NEW} '<kernel_parameters>' where: <kernel_parameters> Specifies a set of kernel parameters to be stored as system control program data (SCPDATA). When booting Linux, these kernel parameters are concatenated to the end of the existing kernel parameters that are used by your boot configuration. The combined parameter string must not exceed 896 characters. <APPEND|NEW> Optional: Specify APPEND to append kernel parameters to existing SCPDATA. This is the default. Specify NEW to replace existing SCPDATA. Example USD cp set loaddev scpdata 'rd.zfcp=0.0.8001,0x500507630a0350a4,0x4000409D00000000 ip=encbdd0:dhcp::02:00:00:02:34:02 rd.neednet=1' To start the IPL and boot process, run the following command: USD cp i <devno> where: <devno> Specifies the device number of the boot device as seen by the guest. 2.7.2. Installing single-node OpenShift with RHEL KVM on IBM Z and IBM LinuxONE Prerequisites You have installed podman . Procedure Set the OpenShift Container Platform version by running the following command: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version. For example, latest-4.18 . Set the host architecture by running the following command: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture s390x . Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. 
Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Obtain the RHEL kernel , initramfs , and rootfs artifacts from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel rhcos-<version>-live-kernel-<architecture> initramfs rhcos-<version>-live-initramfs.<architecture>.img rootfs rhcos-<version>-live-rootfs.<architecture>.img Before you launch virt-install , move the following files and artifacts to an HTTP or HTTPS server: Downloaded RHEL live kernel , initramfs , and rootfs artifacts Ignition files Create the KVM guest nodes by using the following components: RHEL kernel and initramfs artifacts Ignition files The new disk image Adjusted parm line arguments USD virt-install \ --name <vm_name> \ --autostart \ --memory=<memory_mb> \ --cpu host \ --vcpus <vcpus> \ --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ 1 --disk size=100 \ --network network=<virt_network_parm> \ --graphics none \ --noautoconsole \ --extra-args "rd.neednet=1 ignition.platform.id=metal ignition.firstboot" \ --extra-args "ignition.config.url=http://<http_server>/bootstrap.ign" \ 2 --extra-args "coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img" \ 3 --extra-args "ip=<ip>::<gateway>:<mask>:<hostname>::none" \ 4 --extra-args "nameserver=<dns>" \ --extra-args "console=ttysclp0" \ --wait 1 For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server. 2 Specify the location of the bootstrap.ign config file. Only HTTP and HTTPS protocols are supported. 3 For the coreos.live.rootfs_url= artifact, specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 For the ip= parameter, assign the IP address manually as described in "Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE". 2.7.3. Installing single-node OpenShift in an LPAR on IBM Z and IBM LinuxONE Prerequisites If you are deploying a single-node cluster there are zero compute nodes, the Ingress Controller pods run on the control plane nodes. In single-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. Procedure Set the OpenShift Container Platform version by running the following command: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version. For example, latest-4.18 . Set the host architecture by running the following command: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture s390x . 
Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxvf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} pullSecret: '<pull_secret>' 7 sshKey: | <ssh_key> 8 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 8 Add the public SSH key from the administration host so that you can log in to the cluster after installation. Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to true . Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to true as shown in the following spec stanza: spec: mastersSchedulable: true status: {} Save and exit the file. Create the Ignition configuration files by running the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Obtain the RHEL kernel , initramfs , and rootfs artifacts from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. 
They resemble the following examples: kernel rhcos-<version>-live-kernel-<architecture> initramfs rhcos-<version>-live-initramfs.<architecture>.img rootfs rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Move the following artifacts and files to an HTTP or HTTPS server: Downloaded RHEL live kernel , initramfs , and rootfs artifacts Ignition files Create a parameter file for the bootstrap in an LPAR: Example parameter file for the bootstrap machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ 4 rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 \ rd.dasd=0.0.4411 \ 5 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ 6 zfcp.allow_lun_scan=0 1 Specify the block device on the system to install to. For installations on DASD-type disk use dasda , for installations on FCP-type disks use sda . 2 Specify the location of the bootstrap.ign config file. Only HTTP and HTTPS protocols are supported. 3 For the coreos.live.rootfs_url= artifact, specify the matching rootfs artifact for the kernel`and `initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 For the ip= parameter, assign the IP address manually as described in "Installing a cluster in an LPAR on IBM Z(R) and IBM(R) LinuxONE". 5 For installations on DASD-type disks, use rd.dasd= to specify the DASD where RHCOS is to be installed. Omit this entry for FCP-type disks. 6 For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks. You can adjust further parameters if required. Create a parameter file for the control plane in an LPAR: Example parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 \ rd.dasd=0.0.4411 \ rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ zfcp.allow_lun_scan=0 1 Specify the location of the master.ign config file. Only HTTP and HTTPS protocols are supported. Transfer the following artifacts, files, and images to the LPAR. For example by using FTP: kernel and initramfs artifacts Parameter files RHCOS images For details about how to transfer the files with FTP and boot, see Installing in an LPAR . Boot the bootstrap machine. Boot the control plane machine. 2.8. Installing single-node OpenShift with IBM Power Installing a single-node cluster on IBM Power(R) requires user-provisioned installation using the "Installing a cluster with IBM Power(R)" procedure. Note Installing a single-node cluster on IBM Power(R) simplifies installation for development and test environments and requires less resource requirements at entry level. Hardware requirements The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to connect to the LoadBalancer service and to serve data for traffic outside of the cluster. 
Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Power(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Additional resources Installing a cluster on IBM Power(R) 2.8.1. Setting up bastion for single-node OpenShift with IBM Power Prior to installing single-node OpenShift on IBM Power(R), you must set up bastion. Setting up a bastion server for single-node OpenShift on IBM Power(R) requires the configuration of the following services: PXE is used for the single-node OpenShift cluster installation. PXE requires the following services to be configured and run: DNS to define api, api-int, and *.apps DHCP service to enable PXE and assign an IP address to the single-node OpenShift node HTTP to provide the Ignition config and the RHCOS rootfs image TFTP to enable PXE You must install dnsmasq to support DNS, DHCP, and PXE, and httpd for HTTP. Use the following procedure to configure a bastion server that meets these requirements. Procedure Use the following command to create the grub2 network boot directory, which is required to enable PXE for PowerVM: grub2-mknetdir --net-directory=/var/lib/tftpboot Example /var/lib/tftpboot/boot/grub2/grub.cfg file default=0 fallback=1 timeout=1 if [ USD{net_default_mac} == fa:b0:45:27:43:20 ]; then menuentry "CoreOS (BIOS)" { echo "Loading kernel" linux "/rhcos/kernel" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://192.168.10.5:8000/install/rootfs.img ignition.config.url=http://192.168.10.5:8000/ignition/sno.ign echo "Loading initrd" initrd "/rhcos/initramfs.img" } fi Use the following commands to download the RHCOS image files from the mirror repository for PXE. Enter the following command to assign the RHCOS_URL variable the following 4.12 URL: USD export RHCOS_URL=https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest/ Enter the following command to navigate to the /var/lib/tftpboot/rhcos directory: USD cd /var/lib/tftpboot/rhcos Enter the following command to download the RHCOS kernel file from the URL stored in the RHCOS_URL variable and save it as kernel : USD wget USD{RHCOS_URL}/rhcos-live-kernel-ppc64le -O kernel Enter the following command to download the RHCOS initramfs file from the URL stored in the RHCOS_URL variable and save it as initramfs.img : USD wget USD{RHCOS_URL}/rhcos-live-initramfs.ppc64le.img -O initramfs.img Enter the following command to navigate to the /var/www/html/install/ directory: USD cd /var/www/html/install/ Enter the following command to download the RHCOS root file system image from the URL stored in the RHCOS_URL variable and save it as rootfs.img : USD wget USD{RHCOS_URL}/rhcos-live-rootfs.ppc64le.img -O rootfs.img To create the ignition file for a single-node OpenShift cluster, you must create the install-config.yaml file.
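Before moving on to the install-config.yaml file, you can put the DNS, DHCP, and TFTP services described at the start of this procedure in place. The following dnsmasq snippet is a minimal sketch only: it reuses the 192.168.10.0/24 network, the bastion address 192.168.10.5, and the MAC address from the grub.cfg example above, and it assumes a cluster named sno with base domain example.com, an interface named env2, a node address of 192.168.10.30, and the boot file path that grub2-mknetdir typically creates for PowerVM. Verify all of these against your environment.
# /etc/dnsmasq.d/sno.conf (illustrative sketch)
interface=env2
enable-tftp
tftp-root=/var/lib/tftpboot
dhcp-range=192.168.10.20,192.168.10.120,12h
dhcp-boot=boot/grub2/powerpc-ieee1275/core.elf
dhcp-host=fa:b0:45:27:43:20,192.168.10.30
address=/api.sno.example.com/192.168.10.30
address=/api-int.sno.example.com/192.168.10.30
address=/apps.sno.example.com/192.168.10.30
The address lines resolve api, api-int, and *.apps to the single-node OpenShift node rather than to the bastion. After editing, restart the service with systemctl restart dnsmasq. With these services running, create the install-config.yaml file as follows.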
Enter the following command to create the work directory that holds the file: USD mkdir -p ~/sno-work Enter the following command to navigate to the ~/sno-work directory: USD cd ~/sno-work Use the following sample file to create the required install-config.yaml in the ~/sno-work directory: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures that the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. Download the openshift-install program, use it to create the Ignition file, and copy the file to the HTTP directory. Enter the following command to download the openshift-install-linux-4.12.0.tar.gz file: USD wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.12.0/openshift-install-linux-4.12.0.tar.gz Enter the following command to unpack the openshift-install-linux-4.12.0.tar.gz archive: USD tar xzvf openshift-install-linux-4.12.0.tar.gz Enter the following command to create the single-node Ignition configuration: USD ./openshift-install --dir=~/sno-work create single-node-ignition-config Enter the following command to copy the Ignition file to the HTTP directory as sno.ign : USD cp ~/sno-work/single-node-ignition-config.ign /var/www/html/ignition/sno.ign Enter the following command to restore the SELinux context for the /var/www/html directory: USD restorecon -vR /var/www/html || true Bastion now has all the required files and is properly configured to install single-node OpenShift. 2.8.2. Installing single-node OpenShift with IBM Power Prerequisites You have set up bastion. Procedure There are two steps for the single-node OpenShift cluster installation. First, the single-node OpenShift logical partition (LPAR) boots up with PXE; then, you monitor the installation progress. Use the following command to boot the PowerVM LPAR with netboot: USD lpar_netboot -i -D -f -t ent -m <sno_mac> -s auto -d auto -S <server_ip> -C <sno_ip> -G <gateway> <lpar_name> default_profile <cec_name> where: sno_mac Specifies the MAC address of the single-node OpenShift cluster. sno_ip Specifies the IP address of the single-node OpenShift cluster. server_ip Specifies the IP address of the bastion (PXE server). gateway Specifies the network's gateway IP address. lpar_name Specifies the single-node OpenShift LPAR name in the HMC.
cec_name Specifies the name of the managed system (CEC) where the sno_lpar resides. After the single-node OpenShift LPAR boots up with PXE, use the openshift-install command to monitor the progress of the installation: Run the following command and wait for the bootstrap to complete: ./openshift-install wait-for bootstrap-complete After it returns successfully, run the following command and wait for the installation to complete: ./openshift-install wait-for install-complete
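Verification After wait-for install-complete returns successfully, you can run a quick sanity check from bastion. This is a hedged sketch that assumes the ~/sno-work assets directory used above:
USD export KUBECONFIG=~/sno-work/auth/kubeconfig
USD oc get nodes
USD oc get clusterversion
A single node should report Ready with the master,worker roles, and the cluster version should eventually report Available as True.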
[ "example.com", "<cluster_name>.example.com", "export OCP_VERSION=<ocp_version> 1", "export ARCH=<architecture> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz", "tar zxf oc.tar.gz", "chmod +x oc", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)", "curl -L USDISO_URL -o rhcos-live.iso", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install --dir=ocp create single-node-ignition-config", "alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'", "coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso", "./openshift-install --dir=ocp wait-for install-complete", "export KUBECONFIG=ocp/auth/kubeconfig", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.31.3", "dd if=<path_to_iso> of=<path_to_usb> status=progress", "curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia", "curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "variant: openshift version: 4.18.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>'", "butane -pr embedded.yaml -o embedded.ign", "coreos-installer iso ignition embed -i embedded.ign rhcos-4.18.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.18.0-x86_64-live.x86_64.iso", "coreos-installer iso ignition show rhcos-sshd-4.18.0-x86_64-live.x86_64.iso", "{ \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]\" ] } ] } }", "OCP_VERSION=<ocp_version> 1", "ARCH=<architecture> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz", "tar zxf oc.tar.gz", "chmod +x oc", "curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install --dir=ocp create single-node-ignition-config", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal ignition.config.url=http://<http_server>:8080/ignition/bootstrap-in-place-for-live-iso.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 ip=<ip>::<gateway>:<mask>:<hostname>::none nameserver=<dns> \\ 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.dasd=0.0.4411 \\ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 5 zfcp.allow_lun_scan=0", "cp ipl c", "cp i <devno> clear loadparm prompt", "cp vi vmsg 0 <kernel_parameters>", "cp set loaddev portname <wwpn> lun <lun>", "cp set loaddev bootprog <n>", "cp set loaddev scpdata {APPEND|NEW} '<kernel_parameters>'", "cp set loaddev scpdata 'rd.zfcp=0.0.8001,0x500507630a0350a4,0x4000409D00000000 ip=encbdd0:dhcp::02:00:00:02:34:02 rd.neednet=1'", "cp i <devno>", "OCP_VERSION=<ocp_version> 1", "ARCH=<architecture> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz", "tar zxf oc.tar.gz", "chmod +x oc", "curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install --dir=ocp create single-node-ignition-config", "virt-install --name <vm_name> --autostart --memory=<memory_mb> --cpu host --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ 1 --disk size=100 --network network=<virt_network_parm> 
--graphics none --noautoconsole --extra-args \"rd.neednet=1 ignition.platform.id=metal ignition.firstboot\" --extra-args \"ignition.config.url=http://<http_server>/bootstrap.ign\" \\ 2 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 3 --extra-args \"ip=<ip>::<gateway>:<mask>:<hostname>::none\" \\ 4 --extra-args \"nameserver=<dns>\" --extra-args \"console=ttysclp0\" --wait", "OCP_VERSION=<ocp_version> 1", "ARCH=<architecture> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz", "tar zxvf oc.tar.gz", "chmod +x oc", "curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} pullSecret: '<pull_secret>' 7 sshKey: | <ssh_key> 8", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install create manifests --dir <installation_directory> 1", "spec: mastersSchedulable: true status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \\ 4 rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 rd.dasd=0.0.4411 \\ 5 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 6 zfcp.allow_lun_scan=0", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> coreos.inst.ignition_url=http://<http_server>/master.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 rd.dasd=0.0.4411 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 zfcp.allow_lun_scan=0", "grub2-mknetdir --net-directory=/var/lib/tftpboot", "default=0 fallback=1 timeout=1 if [ USD{net_default_mac} == fa:b0:45:27:43:20 ]; then menuentry \"CoreOS (BIOS)\" { echo \"Loading kernel\" linux \"/rhcos/kernel\" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://192.168.10.5:8000/install/rootfs.img ignition.config.url=http://192.168.10.5:8000/ignition/sno.ign echo \"Loading initrd\" initrd \"/rhcos/initramfs.img\" } fi", "export RHCOS_URL=https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest/", "cd /var/lib/tftpboot/rhcos", "wget USD{RHCOS_URL}/rhcos-live-kernel-ppc64le -o kernel", "wget USD{RHCOS_URL}/rhcos-live-initramfs.ppc64le.img -o initramfs.img", "cd /var//var/www/html/install/", "wget USD{RHCOS_URL}/rhcos-live-rootfs.ppc64le.img -o rootfs.img", "mkdir -p ~/sno-work", "cd ~/sno-work", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - 
cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9", "wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.12.0/openshift-install-linux-4.12.0.tar.gz", "tar xzvf openshift-install-linux-4.12.0.tar.gz", "./openshift-install --dir=~/sno-work create create single-node-ignition-config", "cp ~/sno-work/single-node-ignition-config.ign /var/www/html/ignition/sno.ign", "restorecon -vR /var/www/html || true", "lpar_netboot -i -D -f -t ent -m <sno_mac> -s auto -d auto -S <server_ip> -C <sno_ip> -G <gateway> <lpar_name> default_profile <cec_name>", "./openshift-install wait-for bootstrap-complete", "./openshift-install wait-for install-complete" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_a_single_node/install-sno-installing-sno
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 0.3-06 Fri Aug 9 2019 Mirek Jahoda Version for 7.7 GA publication. Revision 0.3-05 Sat Oct 20 2018 Mirek Jahoda Version for 7.6 GA publication. Revision 0.3-03 Tue Apr 3 2018 Mirek Jahoda Version for 7.5 GA publication. Revision 0.3-01 Thu Jul 13 2017 Mirek Jahoda Version for 7.4 GA publication. Revision 0.2-18 Wed Nov 2 2016 Mirek Jahoda Version for 7.3 GA publication. Revision 0.2-11 Sun Jun 26 2016 Mirek Jahoda Async release with fixes. Revision 0.2-10 Sun Feb 14 2016 Robert Kratky Async release with fixes. Revision 0.2-9 Thu Dec 10 2015 Barbora Ancincova Added the Red Hat Gluster Storage chapter. Revision 0.2-8 Thu Nov 11 2015 Barbora Ancincova Red Hat Enterprise Linux 7.2 GA release of the book. Revision 0.2-7 Thu Aug 13 2015 Barbora Ancincova Red Hat Enterprise Linux 7.2 Beta release of the book. Revision 0.2-6 Wed Feb 18 2015 Barbora Ancincova Red Hat Enterprise Linux 7.1 GA release of the book. Revision 0.2-5 Fri Dec 05 2014 Barbora Ancincova Update to sort order on the Red Hat Customer Portal. Revision 0.2-4 Thu Dec 04 2014 Barbora Ancincova Red Hat Enterprise Linux 7.1 Beta release of the book. Revision 0.1-41 Tue May 20 2014 Tomas Capek Rebuild for style changes. Revision 0.1-1 Tue Jan 17 2013 Tomas Capek Initial creation of the book for Red Hat Enterprise Linux 7
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/appe-documentation-selinux_users_and_administrators_guide-revision_history
25.3. Booleans
25.3. Booleans SELinux is based on the least level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: openshift_use_nfs Having this Boolean enabled allows installing OpenShift on an NFS share. Note Due to the continuous development of the SELinux policy, the list above might not contain all Booleans related to the service at all times. To list them, enter the following command: Enter the following command to view description of a particular Boolean: Note that the additional policycoreutils-devel package providing the sepolicy utility is required for this command to work.
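For example, to allow OpenShift to use NFS storage as described above, enable the Boolean persistently and confirm its state. This is a minimal sketch; run the commands as root, and note that the -P option writes the change to the policy so that it persists across reboots:
~]# setsebool -P openshift_use_nfs on
~]# getsebool openshift_use_nfs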
[ "~]USD getsebool -a | grep service_name", "~]USD sepolicy booleans -b boolean_name" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-openshift-booleans
Chapter 33. Security
Chapter 33. Security When firewalld starts, net.netfilter.nf_conntrack_max is no longer reset to default if its configuration exists Previously, firewalld reset the nf_conntrack settings to their default values when it was started or restarted. As a consequence, the net.netfilter.nf_conntrack_max setting was restored to its default value. With this update, each time firewalld starts, it reloads nf_conntrack sysctls as they are configured in /etc/sysctl.conf and /etc/sysctl.d . As a result, net.netfilter.nf_conntrack_max maintains the user-configured value. (BZ#1462977) Tomcat can now be started using tomcat-jsvc with SELinux in enforcing mode In Red Hat Enterprise Linux 7.4, the tomcat_t unconfined domain was not correctly defined in the SELinux policy. Consequently, the Tomcat server cannot be started by the tomcat-jsvc service with SELinux in enforcing mode. This update allows the tomcat_t domain to use the dac_override , setuid , and kill capability rules. As a result, Tomcat is now able to start through tomcat-jsvc with SELinux in enforcing mode. (BZ# 1470735 ) SELinux now allows vdsm to communicate with lldpad Prior to this update, SELinux in enforcing mode denied the vdsm daemon to access lldpad information. Consequently, vdsm was not able to work correctly. With this update, a rule to allow a virtd_t domain to send data to a lldpad_t domain through the dgram socket has been added to the selinux-policy packages. As a result, vdsm labeled as virtd_t can now communicate with lldpad labeled as lldpad_t if SELinux is set to enforcing mode. (BZ# 1472722 ) OpenSSH servers without Privilege Separation no longer crash Prior to this update, a pointer had been dereferenced before its validity was checked. Consequently, OpenSSH servers with the Privilege Separation option turned off crashed during the session cleanup. With this update, pointers are checked properly, and OpenSSH servers no longer crash while running without Privilege Separation due the described bug. Note that disabling OpenSSH Privilege Separation is not recommended. (BZ# 1488083 ) The clevis luks bind command no longer fails with the DISA STIG-compliant password policy Previously, passwords generated as part of the clevis luks bind command were not compliant with the Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) password policy set in the pwquality.conf file. Consequently, clevis luks bind failed on DISA STIG-compliant systems in certain cases. With this update, passwords are generated using a utility designed to generate random passwords that pass the password policy, and clevis luks bind now succeeds in the described scenario. (BZ# 1500975 ) WinSCP 5.10 now works properly with OpenSSH Previously, OpenSSH incorrectly recognized WinSCP version 5.10 as older version 5.1. As a consequence, the compatibility bits for WinSCP version 5.1 were enabled for WinSCP 5.10, and the newer version did not work properly with OpenSSH . With this update, the version selectors have been fixed, and WinSCP 5.10 now works properly with OpenSSH servers. (BZ# 1496808 ) SFTP no longer allows to create zero-length files in read-only mode Prior to this update, the process_open function in the OpenSSH SFTP server did not properly prevent write operations in read-only mode. Consequently, attackers were allowed to create zero-length files. With this update, the function has been fixed, and the SFTP server no longer allows any file creation in read-only mode. (BZ#1517226)
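As a hedged illustration of the conntrack behavior described in the first note above, a persistent value can be placed in a drop-in file so that it is re-applied whenever firewalld starts; the file name and the value shown here are examples only:
# echo "net.netfilter.nf_conntrack_max = 262144" > /etc/sysctl.d/90-conntrack.conf
# sysctl -p /etc/sysctl.d/90-conntrack.conf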
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/bug_fixes_security
3.11. Cluster Networking
3.11. Cluster Networking Cluster level networking objects include: Clusters Logical Networks Figure 3.1. Networking within a cluster A data center is a logical grouping of multiple clusters and each cluster is a logical group of multiple hosts. Figure 3.1, "Networking within a cluster" depicts the contents of a single cluster. Hosts in a cluster all have access to the same storage domains. Hosts in a cluster also have logical networks applied at the cluster level. For a virtual machine logical network to become operational for use with virtual machines, the network must be defined and implemented for each host in the cluster using the Red Hat Virtualization Manager. Other logical network types can be implemented on only the hosts that use them. Multi-host network configuration automatically applies any updated network settings to all of the hosts within the data center to which the network is assigned.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/cluster_networking
C.2. Identity Management Log Files and Directories
C.2. Identity Management Log Files and Directories Table C.9. IdM Server and Client Log Files and Directories Directory or File Description /var/log/ipaserver-install.log The installation log for the IdM server. /var/log/ipareplica-install.log The installation log for the IdM replica. /var/log/ipaclient-install.log The installation log for the IdM client. /var/log/sssd/ Log files for SSSD. ~/.ipa/log/cli.log The log file for errors returned by XML-RPC calls and responses by the ipa utility. Created in the home directory for the system user who runs the tools, who might have a different user name than the IdM user. /etc/logrotate.d/ The log rotation policies for DNS, SSSD, Apache, Tomcat, and Kerberos. /etc/pki/pki-tomcat/logging.properties This link points to the default Certificate Authority logging configuration at /usr/share/pki/server/conf/logging.properties . Table C.10. Apache Server Log Files Directory or File Description /var/log/httpd/ Log files for the Apache web server. /var/log/httpd/access_log Standard access and error logs for Apache servers. Messages specific to IdM are recorded along with the Apache messages because the IdM web UI and the XML-RPC command-line interface use Apache. /var/log/httpd/error_log For details, see Log Files in the Apache documentation. Table C.11. Certificate System Log Files Directory or File Description /var/log/pki/pki-ca-spawn. time_of_installation .log The installation log for the IdM CA. /var/log/pki/pki-kra-spawn. time_of_installation .log The installation log for the IdM KRA. /var/log/pki/pki-tomcat/ The top level directory for PKI operation logs. Contains CA and KRA logs. /var/log/pki/pki-tomcat/ca/ Directory with logs related to certificate operations. In IdM, these logs are used for service principals, hosts, and other entities which use certificates. /var/log/pki/pki-tomcat/kra Directory with logs related to KRA. /var/log/messages Includes certificate error messages among other system messages. For details, see Configuring Subsystem Logs in the Red Hat Certificate System Administration Guide . Table C.12. Directory Server Log Files Directory or File Description /var/log/dirsrv/slapd- REALM_NAME / Log files associated with the Directory Server instance used by the IdM server. Most operational data recorded here are related to server-replica interactions. /var/log/dirsrv/slapd- REALM_NAME /access Contain detailed information about attempted access and operations for the domain Directory Server instance. /var/log/dirsrv/slapd- REALM_NAME /errors /var/log/dirsrv/slapd- REALM_NAME /audit Contains audit trails of all Directory Server operations when auditing is enabled in the Directory Server configuration. For details, see Monitoring Server and Database Activity and Log File Reference in the Red Hat Directory Server documentation. Table C.13. Kerberos Log Files Directory or File Description /var/log/krb5kdc.log The primary log file for the Kerberos KDC server. /var/log/kadmind.log The primary log file for the Kerberos administration server. Locations for these files is configured in the krb5.conf file. They can be different on some systems. Table C.14. DNS Log Files Directory or File Description /var/log/messages Includes DNS error messages among other system messages. DNS logging in this file is not enabled by default. To enable it, run the # /usr/sbin/rndc querylog command. To disable logging, run the command again. Table C.15. Custodia Log Files Directory or File Description /var/log/custodia/ Log file directory for the Custodia service. 
Additional Resources See Using the Journal in the System Administrator's Guide for information on how to use the journalctl utility. You can use journalctl to view the logging output of systemd unit files.
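For example, to watch IdM authentication activity as it happens, you can follow a related journald unit together with the log files listed above. This is a sketch only; unit names vary by deployment, and the Directory Server instance directory is named after your realm:
# journalctl -u sssd.service -f
# tail -f /var/log/krb5kdc.log /var/log/httpd/error_log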
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/log-file-ref
probe::signal.send_sig_queue.return
probe::signal.send_sig_queue.return Name probe::signal.send_sig_queue.return - Queuing a signal to a process completed Synopsis signal.send_sig_queue.return Values retstr Return value as a string name Name of the probe point
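For example, the following one-line SystemTap script prints the values listed above each time the probe fires. Run it as root; like other kernel probes, it requires the kernel debuginfo packages that SystemTap normally needs:
# stap -e 'probe signal.send_sig_queue.return { printf("%s: %s\n", name, retstr) }'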
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-send-sig-queue-return
1.3. Before Setting Up GFS2
1.3. Before Setting Up GFS2 Before you install and set up GFS2, note the following key characteristics of your GFS2 file systems: GFS2 nodes Determine which nodes in the cluster will mount the GFS2 file systems. Number of file systems Determine how many GFS2 file systems to create initially. (More file systems can be added later.) File system name Determine a unique name for each file system. The name must be unique for all lock_dlm file systems over the cluster. Each file system name is required in the form of a parameter variable. For example, this book uses file system names mydata1 and mydata2 in some example procedures. Journals Determine the number of journals for your GFS2 file systems. One journal is required for each node that mounts a GFS2 file system. GFS2 allows you to add journals dynamically at a later point as additional servers mount a file system. For information on adding journals to a GFS2 file system, see Section 3.6, "Adding Journals to a GFS2 File System" . Storage devices and partitions Determine the storage devices and partitions to be used for creating logical volumes (using CLVM) in the file systems. Time protocol Make sure that the clocks on the GFS2 nodes are synchronized. It is recommended that you use the Precision Time Protocol (PTP) or, if necessary for your configuration, the Network Time Protocol (NTP) software provided with your Red Hat Enterprise Linux distribution. Note The system clocks in GFS2 nodes must be within a few minutes of each other to prevent unnecessary inode time stamp updating. Unnecessary inode time stamp updating severely impacts cluster performance. Note You may see performance problems with GFS2 when many create and delete operations are issued from more than one node in the same directory at the same time. If this causes performance problems in your system, you should localize file creation and deletions by a node to directories specific to that node as much as possible. For further recommendations on creating, using, and maintaining a GFS2 file system. see Chapter 2, GFS2 Configuration and Operational Considerations .
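As a concrete illustration of these choices, the following mkfs.gfs2 command is a hedged sketch. It uses the file system name mydata1 from the text above, and it assumes a cluster named mycluster, two mounting nodes (and therefore two journals), and an example CLVM logical volume path; substitute your own names and device:
# mkfs.gfs2 -p lock_dlm -t mycluster:mydata1 -j 2 /dev/vg_cluster/lv_mydata1
The -t value must take the form clustername:fsname and must be unique across the cluster, and -j must be at least the number of nodes that will mount the file system.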
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-ov-preconfig
probe::signal.pending
probe::signal.pending Name probe::signal.pending - Examining pending signal Synopsis signal.pending Values name Name of the probe point sigset_size The size of the user-space signal set sigset_add The address of the user-space signal set (sigset_t) Description This probe is used to examine a set of signals pending for delivery to a specific thread. This normally occurs when the do_sigpending kernel function is executed.
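For example, the following one-line SystemTap script prints these values whenever a thread examines its pending signals. Run it as root:
# stap -e 'probe signal.pending { printf("%s: sigset at 0x%x, size %d\n", name, sigset_add, sigset_size) }'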
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-signal-pending
Chapter 12. Monitoring bare-metal events with the Bare Metal Event Relay
Chapter 12. Monitoring bare-metal events with the Bare Metal Event Relay Important Bare Metal Event Relay is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 12.1. About bare-metal events Important The Bare Metal Event Relay Operator is deprecated. The ability to monitor bare-metal hosts by using the Bare Metal Event Relay Operator will be removed in a future OpenShift Container Platform release. Use the Bare Metal Event Relay to subscribe applications that run in your OpenShift Container Platform cluster to events that are generated on the underlying bare-metal host. The Redfish service publishes events on a node and transmits them on an advanced message queue to subscribed applications. Bare-metal events are based on the open Redfish standard that is developed under the guidance of the Distributed Management Task Force (DMTF). Redfish provides a secure industry-standard protocol with a REST API. The protocol is used for the management of distributed, converged or software-defined resources and infrastructure. Hardware-related events published through Redfish includes: Breaches of temperature limits Server status Fan status Begin using bare-metal events by deploying the Bare Metal Event Relay Operator and subscribing your application to the service. The Bare Metal Event Relay Operator installs and manages the lifecycle of the Redfish bare-metal event service. Note The Bare Metal Event Relay works only with Redfish-capable devices on single-node clusters provisioned on bare-metal infrastructure. 12.2. How bare-metal events work The Bare Metal Event Relay enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure. These hardware events are delivered using an HTTP transport or AMQP mechanism. The latency of the messaging service is between 10 to 20 milliseconds. The Bare Metal Event Relay provides a publish-subscribe service for the hardware events. Applications can use a REST API to subscribe to the events. The Bare Metal Event Relay supports hardware that complies with Redfish OpenAPI v1.8 or later. 12.2.1. Bare Metal Event Relay data flow The following figure illustrates an example bare-metal events data flow: Figure 12.1. Bare Metal Event Relay data flow 12.2.1.1. Operator-managed pod The Operator uses custom resources to manage the pod containing the Bare Metal Event Relay and its components using the HardwareEvent CR. 12.2.1.2. Bare Metal Event Relay At startup, the Bare Metal Event Relay queries the Redfish API and downloads all the message registries, including custom registries. The Bare Metal Event Relay then begins to receive subscribed events from the Redfish hardware. The Bare Metal Event Relay enables applications running on bare-metal clusters to respond quickly to Redfish hardware changes and failures such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure. The events are reported using the HardwareEvent CR. 
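The following is a minimal sketch of a HardwareEvent CR of the kind referred to above. The apiVersion and the spec fields shown here are assumptions based on the Operator's event.redhat-cne.org API group; check the HardwareEvent CRD installed in your cluster for the authoritative schema before using it:
apiVersion: "event.redhat-cne.org/v1alpha1"
kind: HardwareEvent
metadata:
  name: hardware-event            # assumed example name
  namespace: openshift-bare-metal-events
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # assumed node selector
  logLevel: "debug"
  msgParserTimeout: "10"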
12.2.1.3. Cloud native event Cloud native events (CNE) is a REST API specification for defining the format of event data. 12.2.1.4. CNCF CloudEvents CloudEvents is a vendor-neutral specification developed by the Cloud Native Computing Foundation (CNCF) for defining the format of event data. 12.2.1.5. HTTP transport or AMQP dispatch router The HTTP transport or AMQP dispatch router is responsible for the message delivery service between publisher and subscriber. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status . 12.2.1.6. Cloud event proxy sidecar The cloud event proxy sidecar container image is based on the O-RAN API specification and provides a publish-subscribe event framework for hardware events. 12.2.2. Redfish message parsing service In addition to handling Redfish events, the Bare Metal Event Relay provides message parsing for events without a Message property. The proxy downloads all the Redfish message registries, including vendor-specific registries, from the hardware when it starts. If an event does not contain a Message property, the proxy uses the Redfish message registries to construct the Message and Resolution properties and add them to the event before passing the event to the cloud events framework. This service allows Redfish events to have a smaller message size and lower transmission latency. 12.2.3. Installing the Bare Metal Event Relay using the CLI As a cluster administrator, you can install the Bare Metal Event Relay Operator by using the CLI. Prerequisites A cluster that is installed on bare-metal hardware with nodes that have a Redfish-enabled Baseboard Management Controller (BMC). Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the Bare Metal Event Relay. Save the following YAML in the bare-metal-events-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-bare-metal-events labels: name: openshift-bare-metal-events openshift.io/cluster-monitoring: "true" Create the Namespace CR: $ oc create -f bare-metal-events-namespace.yaml Create an Operator group for the Bare Metal Event Relay Operator. Save the following YAML in the bare-metal-events-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bare-metal-event-relay-group namespace: openshift-bare-metal-events spec: targetNamespaces: - openshift-bare-metal-events Create the OperatorGroup CR: $ oc create -f bare-metal-events-operatorgroup.yaml Subscribe to the Bare Metal Event Relay. Save the following YAML in the bare-metal-events-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bare-metal-event-relay-subscription namespace: openshift-bare-metal-events spec: channel: "stable" name: bare-metal-event-relay source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR: $ oc create -f bare-metal-events-sub.yaml Verification To verify that the Bare Metal Event Relay Operator is installed, run the following command: $ oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.name,Phase:.status.phase 12.2.4.
Installing the Bare Metal Event Relay using the web console As a cluster administrator, you can install the Bare Metal Event Relay Operator using the web console. Prerequisites A cluster that is installed on bare-metal hardware with nodes that have a Redfish-enabled Baseboard Management Controller (BMC). Log in as a user with cluster-admin privileges. Procedure Install the Bare Metal Event Relay using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Bare Metal Event Relay from the list of available Operators, and then click Install . On the Install Operator page, select or create a Namespace , select openshift-bare-metal-events , and then click Install . Verification Optional: You can verify that the Operator installed successfully by performing the following check: Switch to the Operators Installed Operators page. Ensure that Bare Metal Event Relay is listed in the project with a Status of InstallSucceeded . Note During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the project namespace. 12.3. Installing the AMQ messaging bus To pass Redfish bare-metal event notifications between publisher and subscriber on a node, you can install and configure an AMQ messaging bus to run locally on the node. You do this by installing the AMQ Interconnect Operator for use in the cluster. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status . Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Install the AMQ Interconnect Operator to its own amq-interconnect namespace. See Installing the AMQ Interconnect Operator . Verification Verify that the AMQ Interconnect Operator is available and the required pods are running: $ oc get pods -n amq-interconnect Example output NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h Verify that the required bare-metal-event-relay bare-metal event producer pod is running in the openshift-bare-metal-events namespace: $ oc get pods -n openshift-bare-metal-events Example output NAME READY STATUS RESTARTS AGE hw-event-proxy-operator-controller-manager-74d5649b7c-dzgtl 2/2 Running 0 25s 12.4. Subscribing to Redfish BMC bare-metal events for a cluster node You can subscribe to Redfish BMC events generated on a node in your cluster by creating a BMCEventSubscription custom resource (CR) for the node, creating a HardwareEvent CR for the event, and creating a Secret CR for the BMC. 12.4.1. Subscribing to bare-metal events You can configure the baseboard management controller (BMC) to send bare-metal events to subscribed applications running in an OpenShift Container Platform cluster.
Example Redfish bare-metal events include an increase in device temperature or removal of a device. You subscribe applications to bare-metal events using a REST API. Important You can only create a BMCEventSubscription custom resource (CR) for physical hardware that supports Redfish and has a vendor interface set to redfish or idrac-redfish . Note Use the BMCEventSubscription CR to subscribe to predefined Redfish events. The Redfish standard does not provide an option to create specific alerts and thresholds. For example, to receive an alert event when an enclosure's temperature exceeds 40 degrees Celsius, you must manually configure the event according to the vendor's recommendations. Perform the following procedure to subscribe to bare-metal events for the node using a BMCEventSubscription CR. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Get the user name and password for the BMC. Deploy a bare-metal node with a Redfish-enabled Baseboard Management Controller (BMC) in your cluster, and enable Redfish events on the BMC. Note Enabling Redfish events on specific hardware is outside the scope of this information. For more information about enabling Redfish events for your specific hardware, consult the BMC manufacturer documentation. Procedure Confirm that the node hardware has the Redfish EventService enabled by running the following curl command: $ curl https://<bmc_ip_address>/redfish/v1/EventService --insecure -H 'Content-Type: application/json' -u "<bmc_username>:<password>" where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output { "@odata.context": "/redfish/v1/$metadata#EventService.EventService", "@odata.id": "/redfish/v1/EventService", "@odata.type": "#EventService.v1_0_2.EventService", "Actions": { "#EventService.SubmitTestEvent": { "EventType@Redfish.AllowableValues": ["StatusChange", "ResourceUpdated", "ResourceAdded", "ResourceRemoved", "Alert"], "target": "/redfish/v1/EventService/Actions/EventService.SubmitTestEvent" } }, "DeliveryRetryAttempts": 3, "DeliveryRetryIntervalSeconds": 30, "Description": "Event Service represents the properties for the service", "EventTypesForSubscription": ["StatusChange", "ResourceUpdated", "ResourceAdded", "ResourceRemoved", "Alert"], "EventTypesForSubscription@odata.count": 5, "Id": "EventService", "Name": "Event Service", "ServiceEnabled": true, "Status": { "Health": "OK", "HealthRollup": "OK", "State": "Enabled" }, "Subscriptions": { "@odata.id": "/redfish/v1/EventService/Subscriptions" } } Get the Bare Metal Event Relay service route for the cluster by running the following command: $ oc get route -n openshift-bare-metal-events Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hw-event-proxy hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com hw-event-proxy-service 9087 edge None Create a BMCEventSubscription resource to subscribe to the Redfish events: Save the following YAML in the bmc_sub.yaml file: apiVersion: metal3.io/v1alpha1 kind: BMCEventSubscription metadata: name: sub-01 namespace: openshift-machine-api spec: hostName: <hostname> 1 destination: <proxy_service_url> 2 context: '' 1 Specifies the name or UUID of the worker node where the Redfish events are generated. 2 Specifies the bare-metal event proxy service, for example, https://hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com/webhook .
Create the BMCEventSubscription CR: $ oc create -f bmc_sub.yaml Optional: To delete the BMC event subscription, run the following command: $ oc delete -f bmc_sub.yaml Optional: To manually create a Redfish event subscription without creating a BMCEventSubscription CR, run the following curl command, specifying the BMC username and password: $ curl -i -k -X POST -H "Content-Type: application/json" -d '{"Destination": "https://<proxy_service_url>", "Protocol" : "Redfish", "EventTypes": ["Alert"], "Context": "root"}' -u <bmc_username>:<password> 'https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions' -v where: proxy_service_url is the bare-metal event proxy service, for example, https://hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com/webhook . bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output HTTP/1.1 201 Created Server: AMI MegaRAC Redfish Service Location: /redfish/v1/EventService/Subscriptions/1 Allow: GET, POST Access-Control-Allow-Origin: * Access-Control-Expose-Headers: X-Auth-Token Access-Control-Allow-Headers: X-Auth-Token Access-Control-Allow-Credentials: true Cache-Control: no-cache, must-revalidate Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json>; rel=describedby Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json> Link: </redfish/v1/EventService/Subscriptions>; path= ETag: "1651135676" Content-Type: application/json; charset=UTF-8 OData-Version: 4.0 Content-Length: 614 Date: Thu, 28 Apr 2022 08:47:57 GMT 12.4.2. Querying Redfish bare-metal event subscriptions with curl Some hardware vendors limit the number of Redfish hardware event subscriptions. You can query the number of Redfish event subscriptions by using curl . Prerequisites Get the user name and password for the BMC. Deploy a bare-metal node with a Redfish-enabled Baseboard Management Controller (BMC) in your cluster, and enable Redfish hardware events on the BMC. Procedure Check the current subscriptions for the BMC by running the following curl command: $ curl --globoff -H "Content-Type: application/json" -k -X GET --user <bmc_username>:<password> https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated. Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 435 100 435 0 0 399 0 0:00:01 0:00:01 --:--:-- 399 { "@odata.context": "/redfish/v1/$metadata#EventDestinationCollection.EventDestinationCollection", "@odata.etag": ""1651137375"", "@odata.id": "/redfish/v1/EventService/Subscriptions", "@odata.type": "#EventDestinationCollection.EventDestinationCollection", "Description": "Collection for Event Subscriptions", "Members": [ { "@odata.id": "/redfish/v1/EventService/Subscriptions/1" }], "Members@odata.count": 1, "Name": "Event Subscriptions Collection" } In this example, a single subscription is configured: /redfish/v1/EventService/Subscriptions/1 . Optional: To remove the /redfish/v1/EventService/Subscriptions/1 subscription with curl , run the following command, specifying the BMC username and password: $ curl --globoff -L -w "%{http_code} %{url_effective}\n" -k -u <bmc_username>:<password> -H "Content-Type: application/json" -d '{}' -X DELETE https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions/1 where: bmc_ip_address is the IP address of the BMC where the Redfish events are generated.
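If you need to run these checks against several BMCs, you can wrap the same Redfish calls in a small script. The following sketch assumes that the jq utility is available on the workstation and uses placeholder values for the BMC address and credentials:
BMC_ADDR=<bmc_ip_address>
BMC_CREDS="<bmc_username>:<password>"
# Confirm that the EventService is enabled and which event types it offers
curl -sk -u "${BMC_CREDS}" "https://${BMC_ADDR}/redfish/v1/EventService" | jq '{ServiceEnabled, EventTypesForSubscription}'
# Print the ID, destination, and protocol of every existing subscription
for sub in $(curl -sk -u "${BMC_CREDS}" "https://${BMC_ADDR}/redfish/v1/EventService/Subscriptions" | jq -r '.Members[]."@odata.id"')
do
  curl -sk -u "${BMC_CREDS}" "https://${BMC_ADDR}${sub}" | jq '{Id, Destination, Protocol}'
done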
12.4.3. Creating the bare-metal event and Secret CRs To start using bare-metal events, create the HardwareEvent custom resource (CR) for the host where the Redfish hardware is present. Hardware events and faults are reported in the hw-event-proxy logs. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have installed the Bare Metal Event Relay. You have created a BMCEventSubscription CR for the BMC Redfish hardware. Procedure Create the HardwareEvent custom resource (CR): Note Multiple HardwareEvent resources are not permitted. Save the following YAML in the hw-event.yaml file: apiVersion: "event.redhat-cne.org/v1alpha1" kind: "HardwareEvent" metadata: name: "hardware-event" spec: nodeSelector: node-role.kubernetes.io/hw-event: "" 1 logLevel: "debug" 2 msgParserTimeout: "10" 3 1 Required. Use the nodeSelector field to target nodes with the specified label, for example, node-role.kubernetes.io/hw-event: "" . Note In OpenShift Container Platform 4.13 or later, you do not need to set the spec.transportHost field in the HardwareEvent resource when you use HTTP transport for bare-metal events. Set transportHost only when you use AMQP transport for bare-metal events. 2 Optional. The default value is debug . Sets the log level in hw-event-proxy logs. The following log levels are available: fatal , error , warning , info , debug , trace . 3 Optional. Sets the timeout value in milliseconds for the Message Parser. If a message parsing request does not receive a response within the timeout duration, the original hardware event message is passed to the cloud native event framework. The default value is 10. Apply the HardwareEvent CR in the cluster: $ oc create -f hw-event.yaml Create a BMC username and password Secret CR that enables the hardware events proxy to access the Redfish message registry for the bare-metal host. Save the following YAML in the hw-event-bmc-secret.yaml file: apiVersion: v1 kind: Secret metadata: name: redfish-basic-auth type: Opaque stringData: 1 username: <bmc_username> password: <bmc_password> # BMC host DNS or IP address hostaddr: <bmc_host_ip_address> 1 Enter plain text values for the various items under stringData . Create the Secret CR: $ oc create -f hw-event-bmc-secret.yaml Additional resources Persistent storage using local volumes 12.5. Subscribing applications to bare-metal events REST API reference Use the bare-metal events REST API to subscribe an application to the bare-metal events that are generated on the parent node. Subscribe applications to Redfish events by using the resource address /cluster/node/<node_name>/redfish/event , where <node_name> is the cluster node running the application. Deploy your cloud-event-consumer application container and cloud-event-proxy sidecar container in a separate application pod. The cloud-event-consumer application subscribes to the cloud-event-proxy container in the application pod.
Use the following API endpoints to subscribe the cloud-event-consumer application to Redfish events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the application pod: /api/ocloudNotifications/v1/subscriptions POST : Creates a new subscription GET : Retrieves a list of subscriptions /api/ocloudNotifications/v1/subscriptions/<subscription_id> PUT : Creates a new status ping request for the specified subscription ID /api/ocloudNotifications/v1/health GET : Returns the health status of ocloudNotifications API Note 9089 is the default port for the cloud-event-consumer container deployed in the application pod. You can configure a different port for your application as required. api/ocloudNotifications/v1/subscriptions HTTP method GET api/ocloudNotifications/v1/subscriptions Description Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions. Example API response [ { "id": "ca11ab76-86f9-428c-8d3a-666c24e34d32", "endpointUri": "http://localhost:9089/api/ocloudNotifications/v1/dummy", "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32", "resource": "/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } ] HTTP method POST api/ocloudNotifications/v1/subscriptions Description Creates a new subscription. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned. Table 12.1. Query parameters Parameter Type subscription data Example payload { "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions", "resource": "/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } api/ocloudNotifications/v1/subscriptions/<subscription_id> HTTP method GET api/ocloudNotifications/v1/subscriptions/<subscription_id> Description Returns details for the subscription with ID <subscription_id> Table 12.2. Query parameters Parameter Type <subscription_id> string Example API response { "id":"ca11ab76-86f9-428c-8d3a-666c24e34d32", "endpointUri":"http://localhost:9089/api/ocloudNotifications/v1/dummy", "uriLocation":"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32", "resource":"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event" } api/ocloudNotifications/v1/health/ HTTP method GET api/ocloudNotifications/v1/health/ Description Returns the health status for the ocloudNotifications REST API. Example API response OK 12.6. Migrating consumer applications to use HTTP transport for PTP or bare-metal events If you have previously deployed PTP or bare-metal events consumer applications, you need to update the applications to use HTTP message transport. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have updated the PTP Operator or Bare Metal Event Relay to version 4.13+ which uses HTTP transport by default. Procedure Update your events consumer application to use HTTP transport. Set the http-event-publishers variable for the cloud event sidecar deployment. 
For example, in a cluster with PTP events configured, the following YAML snippet illustrates a cloud event sidecar deployment: containers: - name: cloud-event-sidecar image: cloud-event-sidecar args: - "--metrics-addr=127.0.0.1:9091" - "--store-path=/store" - "--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043" - "--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" 1 - "--api-port=8089" 1 The PTP Operator automatically resolves NODE_NAME to the host that is generating the PTP events. For example, compute-1.example.com . In a cluster with bare-metal events configured, set the http-event-publishers field to hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043 in the cloud event sidecar deployment CR. Deploy the consumer-events-subscription-service service alongside the events consumer application. For example: apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: "true" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP
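After you update the deployment and create the service, you can check that the sidecar arguments and the subscription service are in place. The following commands are a sketch that assumes the cloud-events namespace and service name from the example above; the consumer deployment name is a placeholder for your own application:
oc -n cloud-events get service consumer-events-subscription-service
oc -n cloud-events get endpoints consumer-events-subscription-service
# Inspect the arguments that were set on the cloud-event-sidecar container
oc -n cloud-events get deployment <consumer_deployment_name> -o jsonpath='{.spec.template.spec.containers[?(@.name=="cloud-event-sidecar")].args}'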
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/scalability_and_performance/using-rfhe
Chapter 1. Getting started with Fuse on JBoss EAP
Chapter 1. Getting started with Fuse on JBoss EAP This chapter introduces Fuse on JBoss EAP, and explains how to install, develop, and build your first Fuse application on a JBoss EAP container. See the following topics for details: Section 1.1, "About Fuse on JBoss EAP" Section 1.2, "Installing Fuse on JBoss EAP" Section 1.3, "Building your first Fuse application on JBoss EAP" 1.1. About Fuse on JBoss EAP JBoss Enterprise Application Platform (EAP), based on Jakarta EE technology (previously, Java EE) from the Eclipse Foundation , was originally created to address use cases for developing enterprise applications. JBoss EAP is characterized by well-defined patterns for implementing services and standardized Java APIs (for example, for persistence, messaging, security, and so on). In recent years, this technology has evolved to be more lightweight, with the introduction of CDI for dependency injection and simplified annotations for enterprise Java beans. Distinctive features of this container technology are: Particularly suited to running in standalone mode. Many standard services (for example, persistence, messaging, security, and so on) pre-configured and provided out-of-the-box. Application WARs typically small and lightweight (because many dependencies are pre-installed in the container). Standardized, backward-compatible Java APIs. 1.2. Installing Fuse on JBoss EAP The standard installation package for Fuse 7.13 on JBoss EAP is available for download from the Red Hat Customer Portal. It installs the standard assembly of the JBoss EAP container, and provides the full Fuse technology stack. Prerequisites You must have a full-subscription account on the Red Hat Customer Portal . You must be logged into the customer portal. You must have downloaded JBoss EAP . You must have downloaded Fuse on JBoss EAP . You must have downloaded the Fuse on JBoss EAP Update 16 . Procedure Run the JBoss EAP installer from a shell prompt, as follows: During installation: Accept the terms and conditions. Choose your preferred installation path, EAP_INSTALL , for the JBoss EAP runtime. Create an administrative user and make a careful note of these administrative user credentials for later. You can accept the default settings on the remaining screens. Open a shell prompt and change directory to EAP_INSTALL . From the EAP_INSTALL directory, run the Fuse on EAP installer, as follows: (Optional) To use Apache Maven from the command line, install and configure Maven as described in Setting up Maven locally . Apply the Fuse on JBoss EAP Update 16 patch. For full instructions, see Red Hat JBoss EAP Patching and Upgrading Guide . 1.3. Building your first Fuse application on JBoss EAP This set of instructions assists you in building your first Fuse application on JBoss EAP. Prerequisites You need a full-subscription account on the Red Hat Customer Portal . You must be logged into the customer portal. You must have downloaded and successfully installed Fuse on JBoss EAP . You must have downloaded and successfully installed the JBoss Tools installer . Procedure In your IDE environment, create a new project, as follows: Select File->New->Fuse Integration Project . In the Project Name field, enter eap-camel . Click Next . In the Select a Target Environment pane, choose the following settings: Select Standalone as the deployment platform.
Select Wildfly/Fuse on EAP as the runtime environment and use the Runtime (optional) dropdown menu to select the JBoss EAP 7.x Runtime server as the target runtime. After selecting the target runtime, the Camel Version is automatically selected for you and the field is grayed out. Click Next . In the Advanced Project Setup pane, select the Spring Bean - Spring DSL template. Click Finish . Important If this is the first time you are building a Fuse project, it will take several minutes for the wizard to finish generating the project. This is because it downloads dependencies from remote Maven repositories. Do not interrupt the wizard or close the window while the project is building in the background. If prompted to open the associated Fuse Integration perspective, click Yes . Wait while JBoss Tools downloads required artifacts and builds the project in the background. Deploy the project to the server, as follows: In the Servers view (bottom right corner of the Fuse Integration perspective), if the server is not already started, select the Red Hat JBoss EAP 7.4 Runtime server and click the green arrow to start it. Wait until you see a message like the following in the Console view: After the server has started, switch back to the Servers view, right-click the server and select Add and Remove from the context menu. In the Add and Remove dialog, select the eap-camel project and click Add > . Click Finish . Verify that the project is working, as follows: Browse to the following URL to access the service running in the eap-camel project: http://localhost:8080/camel-test-spring?name=Kermit The browser window should show the response Hello Kermit . Undeploy the project, as follows: In the Servers view, select the Red Hat JBoss EAP 7.4 Runtime server. Right-click the server and select Add and Remove from the context menu. In the Add and Remove dialog, select your eap-camel project and click < Remove . Click Finish .
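If you prefer to verify the deployed service from a shell prompt instead of a browser, you can call the same endpoint with curl; the URL and the expected response are the ones shown in the verification step above:
curl "http://localhost:8080/camel-test-spring?name=Kermit"
# Expected response:
# Hello Kermit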
[ "java -jar DOWNLOAD_LOCATION/jboss-eap-7.4.16-installer.jar", "java -jar DOWNLOAD_LOCATION/fuse-eap-installer-7.13.0.jar", "14:47:07,283 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.4.0.GA (WildFly Core 10.1.11.Final-redhat-00001) started in 3301ms - Started 314 of 576 services (369 services are lazy, passive or on-demand)" ]
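As an additional sanity check after the installers finish and the server prints the startup message shown above, you can query the EAP management CLI for the product version and for the camel subsystem. The subsystem address used below is an assumption based on the Fuse on EAP (Wildfly-Camel) integration; if the command fails, check the subsystems listed in EAP_INSTALL/standalone/configuration/standalone.xml:
EAP_INSTALL/bin/standalone.sh &
EAP_INSTALL/bin/jboss-cli.sh --connect --command=":read-attribute(name=product-version)"
# The camel subsystem is added by the Fuse on EAP installer (an assumption to verify on your installation)
EAP_INSTALL/bin/jboss-cli.sh --connect --command="/subsystem=camel:read-resource"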
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/getting_started_with_fuse_on_jboss_eap/getting-started-with-fuse-on-jboss-eap
Chapter 4. Importing content
Chapter 4. Importing content This chapter outlines how you can import different types of custom content to Satellite. For example, you can use the following chapters for information on specific types of custom content but the underlying procedures are the same: Chapter 12, Managing ISO images Chapter 14, Managing custom file type content 4.1. Products and repositories in Satellite Both Red Hat content and custom content in Satellite have similarities: The relationship between a Product and its repositories is the same and the repositories still require synchronization. Custom products require a subscription for hosts to access, similar to subscriptions to Red Hat products. Satellite creates a subscription for each custom product you create. Red Hat content is already organized into Products. For example, Red Hat Enterprise Linux Server is a Product in Satellite. The repositories for that Product consist of different versions, architectures, and add-ons. For Red Hat repositories, Products are created automatically after enabling the repository. For more information, see Section 4.6, "Enabling Red Hat repositories" . Other content can be organized into custom products however you want. For example, you might create an EPEL (Extra Packages for Enterprise Linux) Product and add an "EPEL 7 x86_64" repository to it. For more information about creating and packaging RPMs, see the Red Hat Enterprise Linux RPM Packaging Guide . 4.2. Best practices for products and repositories Use one content type per product and content view, for example, yum content only. Make file repositories available over HTTP. If you set Protected to true, you can only download content using a global debugging certificate. Automate the creation of multiple products and repositories by using a Hammer script or an Ansible playbook . For Red Hat content, import your Red Hat manifest into Satellite. For more information, see Chapter 2, Managing Red Hat subscriptions . Avoid uploading content to repositories with an Upstream URL . Instead, create a repository to synchronize content and upload content to without setting an Upstream URL . If you upload content to a repository that already synchronizes another repository, the content might be overwritten, depending on the mirroring policy and content type. 4.3. Importing custom SSL certificates Before you synchronize custom content from an external source, you might need to import SSL certificates into your custom product. This might include client certs and keys or CA certificates for the upstream repositories you want to synchronize. If you require SSL certificates and keys to download packages, you can add them to Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Content Credentials . In the Content Credentials window, click Create Content Credential . In the Name field, enter a name for your SSL certificate. From the Type list, select SSL Certificate . In the Content Credentials Content field, paste your SSL certificate, or click Browse to upload your SSL certificate. Click Save . CLI procedure Copy the SSL certificate to your Satellite Server: Or download the SSL certificate to your Satellite Server from an online source: Upload the SSL Certificate to Satellite: 4.4. Creating a custom product Create a custom product so that you can add repositories to the custom product. To use the CLI instead of the Satellite web UI, see the CLI procedure . 
Procedure In the Satellite web UI, navigate to Content > Products , click Create Product . In the Name field, enter a name for the product. Satellite automatically completes the Label field based on what you have entered for Name . Optional: From the GPG Key list, select the GPG key for the product. Optional: From the SSL CA Cert list, select the SSL CA certificate for the product. Optional: From the SSL Client Cert list, select the SSL client certificate for the product. Optional: From the SSL Client Key list, select the SSL client key for the product. Optional: From the Sync Plan list, select an existing sync plan or click Create Sync Plan and create a sync plan for your product requirements. In the Description field, enter a description of the product. Click Save . CLI procedure To create the product, enter the following command: 4.5. Adding custom RPM repositories Use this procedure to add custom RPM repositories in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . The Products window in the Satellite web UI also provides a Repo Discovery function that finds all repositories from a URL and you can select which ones to add to your custom product. For example, you can use the Repo Discovery to search https://download.postgresql.org/pub/repos/yum/16/redhat/ and list all repositories for different Red Hat Enterprise Linux versions and architectures. This helps users save time importing multiple repositories from a single source. Support for custom RPMs Red Hat does not support the upstream RPMs directly from third-party sites. These RPMs are used to demonstrate the synchronization process. For any issues with these RPMs, contact the third-party developers. Procedure In the Satellite web UI, navigate to Content > Products and select the product that you want to use, and then click New Repository . In the Name field, enter a name for the repository. Satellite automatically completes the Label field based on what you have entered for Name . Optional: In the Description field, enter a description for the repository. From the Type list, select yum as type of repository. Optional: From the Restrict to Architecture list, select an architecture. If you want to make the repository available to all hosts regardless of the architecture, ensure to select No restriction . Optional: From the Restrict to OS Version list, select the OS version. If you want to make the repository available to all hosts regardless of the OS version, ensure to select No restriction . Optional: In the Upstream URL field, enter the URL of the external repository to use as a source. Satellite supports three protocols: http:// , https:// , and file:// . If you are using a file:// repository, you have to place it under /var/lib/pulp/sync_imports/ directory. If you do not enter an upstream URL, you can manually upload packages. Optional: Check the Ignore SRPMs checkbox to exclude source RPM packages from being synchronized to Satellite. Optional: Check the Ignore treeinfo checkbox if you receive the error Treeinfo file should have INI format . All files related to Kickstart will be missing from the repository if treeinfo files are skipped. Select the Verify SSL checkbox if you want to verify that the upstream repository's SSL certificates are signed by a trusted CA. Optional: In the Upstream Username field, enter the user name for the upstream repository if required for authentication. Clear this field if the repository does not require authentication. 
Optional: In the Upstream Password field, enter the corresponding password for the upstream repository. Clear this field if the repository does not require authentication. Optional: In the Upstream Authentication Token field, provide the token of the upstream repository user for authentication. Leave this field empty if the repository does not require authentication. From the Download Policy list, select the type of synchronization Satellite Server performs. For more information, see Section 4.9, "Download policies overview" . From the Mirroring Policy list, select the type of content synchronization Satellite Server performs. For more information, see Section 4.12, "Mirroring policies overview" . Optional: In the Retain package versions field, enter the number of versions you want to retain per package. Optional: In the HTTP Proxy Policy field, select an HTTP proxy. From the Checksum list, select the checksum type for the repository. Optional: You can clear the Unprotected checkbox to require a subscription entitlement certificate for accessing this repository. By default, the repository is published through HTTP. Optional: From the GPG Key list, select the GPG key for the product. Optional: In the SSL CA Cert field, select the SSL CA Certificate for the repository. Optional: In the SSL Client cert field, select the SSL Client Certificate for the repository. Optional: In the SSL Client Key field, select the SSL Client Key for the repository. Click Save to create the repository. CLI procedure Enter the following command to create the repository: Continue to synchronize the repository . 4.6. Enabling Red Hat repositories If outside network access requires usage of an HTTP proxy, configure a default HTTP proxy for your server. For more information, see Adding a Default HTTP Proxy to Satellite . To select the repositories to synchronize, you must first identify the Product that contains the repository, and then enable that repository based on the relevant release version and base architecture. For Red Hat Enterprise Linux 8 hosts To provision Red Hat Enterprise Linux 8 hosts, you require the Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) and Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) repositories. For Red Hat Enterprise Linux 7 hosts To provision Red Hat Enterprise Linux 7 hosts, you require the Red Hat Enterprise Linux 7 Server (RPMs) repository. The difference between associating Red Hat Enterprise Linux operating system release version with either 7Server repositories or 7. X repositories is that 7Server repositories contain all the latest updates while Red Hat Enterprise Linux 7. X repositories stop getting updates after the minor version release. Note that Kickstart repositories only have minor versions. Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . To find repositories, either enter the repository name, or toggle the Recommended Repositories button to the on position to view a list of repositories that you require. In the Available Repositories pane, click a repository to expand the repository set. Click the Enable icon to the base architecture and release version that you want. CLI procedure To search for your Product, enter the following command: List the repository set for the Product: Enable the repository using either the name or ID number. Include the release version, such as 7Server , and base architecture, such as x86_64 . 4.7. Synchronizing repositories You must synchronize repositories to download content into Satellite. 
You can use this procedure for an initial synchronization of repositories or to synchronize repositories manually as you need. You can also sync all repositories in an organization. For more information, see Section 4.8, "Synchronizing all repositories in an organization" . Create a sync plan to ensure updates on a regular basis. For more information, see Section 4.23, "Creating a sync plan" . The synchronization duration depends on the size of each repository and the speed of your network connection. The following table provides estimates of how long it would take to synchronize content, depending on the available Internet bandwidth:
Bandwidth | Single Package (10Mb) | Minor Release (750Mb) | Major Release (6Gb)
256 Kbps | 5 Mins 27 Secs | 6 Hrs 49 Mins 36 Secs | 2 Days 7 Hrs 55 Mins
512 Kbps | 2 Mins 43.84 Secs | 3 Hrs 24 Mins 48 Secs | 1 Day 3 Hrs 57 Mins
T1 (1.5 Mbps) | 54.33 Secs | 1 Hr 7 Mins 54.78 Secs | 9 Hrs 16 Mins 20.57 Secs
10 Mbps | 8.39 Secs | 10 Mins 29.15 Secs | 1 Hr 25 Mins 53.96 Secs
100 Mbps | 0.84 Secs | 1 Min 2.91 Secs | 8 Mins 35.4 Secs
1000 Mbps | 0.08 Secs | 6.29 Secs | 51.54 Secs
Procedure In the Satellite web UI, navigate to Content > Products and select the Product that contains the repositories that you want to synchronize. Select the repositories that you want to synchronize and click Sync Now . Optional: To view the progress of the synchronization in the Satellite web UI, navigate to Content > Sync Status and expand the corresponding Product or repository tree. CLI procedure Synchronize an entire Product: Synchronize an individual repository: 4.8. Synchronizing all repositories in an organization Use this procedure to synchronize all repositories within an organization. Procedure Log in to your Satellite Server using SSH. Run the following Bash script:
ORG="My_Organization"
for i in $(hammer --no-headers --csv repository list --organization $ORG --fields Id)
do
  hammer repository synchronize --id ${i} --organization $ORG --async
done
4.9. Download policies overview Red Hat Satellite provides multiple download policies for synchronizing RPM content. For example, you might want to download only the content metadata while deferring the actual content download for later. Satellite Server has the following policies: Immediate Satellite Server downloads all metadata and packages during synchronization. On Demand Satellite Server downloads only the metadata during synchronization. Satellite Server only fetches and stores packages on the file system when Capsules or directly connected clients request them. This setting has no effect if you set a corresponding repository on a Capsule to Immediate because Satellite Server is forced to download all the packages. The On Demand policy acts as a Lazy Synchronization feature because it saves time when synchronizing content. The lazy synchronization feature must be used only for Yum repositories. You can add the packages to content views and promote them to lifecycle environments as normal. Capsule Server has the following policies: Immediate Capsule Server downloads all metadata and packages during synchronization. Do not use this setting if the corresponding repository on Satellite Server is set to On Demand as Satellite Server is forced to download all the packages. On Demand Capsule Server only downloads the metadata during synchronization. Capsule Server fetches and stores packages only on the file system when directly connected clients request them.
When you use an On Demand download policy, content is downloaded from Satellite Server if it is not available on Capsule Server. Inherit Capsule Server inherits the download policy for the repository from the corresponding repository on Satellite Server. Streamed Download Policy Streamed Download Policy for Capsules permits Capsules to avoid caching any content. When content is requested from the Capsule, it functions as a proxy and requests the content directly from the Satellite. 4.10. Changing the default download policy You can set the default download policy that Satellite applies to repositories that you create in all organizations. Depending on whether it is a Red Hat or non-Red Hat custom repository, Satellite uses separate settings. Changing the default value does not change existing settings. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Content tab. Change the default download policy depending on your requirements: To change the default download policy for a Red Hat repository, change the value of the Default Red Hat Repository download policy setting. To change the default download policy for a custom repository, change the value of the Default Custom Repository download policy setting. CLI procedure To change the default download policy for Red Hat repositories to one of immediate or on_demand , enter the following command: To change the default download policy for a non-Red Hat custom repository to one of immediate or on_demand , enter the following command: 4.11. Changing the download policy for a repository You can set the download policy for a repository. Procedure In the Satellite web UI, navigate to Content > Products . Select the required product name. On the Repositories tab, click the required repository name, locate the Download Policy field, and click the edit icon. From the list, select the required download policy and then click Save . CLI procedure List the repositories for an organization: Change the download policy for a repository to immediate or on_demand : 4.12. Mirroring policies overview Mirroring keeps the local repository exactly in synchronization with the upstream repository. If any content is removed from the upstream repository since the last synchronization, with the synchronization, it will be removed from the local repository as well. You can use mirroring policies for finer control over mirroring of repodata and content when synchronizing a repository. For example, if it is not possible to mirror the repodata for a repository, you can set the mirroring policy to mirror only content for this repository. Satellite Server has the following mirroring policies: Additive Neither the content nor the repodata is mirrored. Thus, only new content added since the last synchronization is added to the local repository and nothing is removed. Content Only Mirrors only content and not the repodata. Some repositories do not support metadata mirroring, in such cases you can set the mirroring policy to content only to only mirror the content. Complete Mirroring Mirrors content as well as repodata. This is the fastest method. This mirroring policy is only available for Yum content. Warning Avoid republishing metadata for repositories with Complete Mirror mirroring policy. This also applies to content views containing repositories with the Complete Mirror mirroring policy. 4.13. Changing the mirroring policy for a repository You can set the mirroring policy for a repository. 
To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Products . Select the product name. On the Repositories tab, click the repository name, locate the Mirroring Policy field, and click the edit icon. From the list, select a mirroring policy and click Save . CLI procedure List the repositories for an organization: Change the mirroring policy for a repository to additive , mirror_complete , or mirror_content_only : 4.14. Uploading content to custom RPM repositories You can upload individual RPMs and source RPMs to custom RPM repositories. You can upload RPMs using the Satellite web UI or the Hammer CLI. You must use the Hammer CLI to upload source RPMs. Procedure In the Satellite web UI, navigate to Content > Products . Click the name of the custom product. In the Repositories tab, click the name of the custom RPM repository. Under Upload Package , click Browse... and select the RPM you want to upload. Click Upload . To view all RPMs in this repository, click the number to Packages under Content Counts . CLI procedure Enter the following command to upload an RPM: Enter the following command to upload a source RPM: When the upload is complete, you can view information about a source RPM by using the commands hammer srpm list and hammer srpm info --id srpm_ID . 4.15. Refreshing content counts on Capsule If your Capsules have synchronized content enabled, you can refresh the number of content counts available to the environments associated with the Capsule. This displays the content views inside those environments available to the Capsule. You can then expand the content view to view the repositories associated with that content view version. Procedure In the Satellite web UI, navigate to Infrastructure > Capsules , and select the Capsule where you want to see the synchronized content. Select the Overview tab. Under Content Sync , toggle the Synchronize button to do an Optimized Sync or a Complete Sync to synchronize the Capsule which refreshes the content counts. Select the Content tab. Choose an Environment to view content views available to those Capsules by clicking > . Expand the content view by clicking > to view repositories available to the content view and the specific version for the environment. View the number of content counts under Packages specific to yum repositories. View the number of errata, package groups, files, container tags, container manifests, and Ansible collections under Additional content . Click the vertical ellipsis in the column to the right to the environment and click Refresh counts to refresh the content counts synchronized on the Capsule under Packages . 4.16. Configuring SELinux to permit content synchronization on custom ports SELinux permits access of Satellite for content synchronization only on specific ports. By default, connecting to web servers running on the following ports is permitted: 80, 81, 443, 488, 8008, 8009, 8443, and 9000. Procedure On Satellite, to verify the ports that are permitted by SELinux for content synchronization, enter a command as follows: To configure SELinux to permit a port for content synchronization, for example 10011, enter a command as follows: 4.17. Recovering a corrupted repository In case of repository corruption, you can recover it by using an advanced synchronization, which has three options: Optimized Sync Synchronizes the repository bypassing packages that have no detected differences from the upstream packages. 
Complete Sync Synchronizes all packages regardless of detected changes. Use this option if specific packages could not be downloaded to the local repository even though they exist in the upstream repository. Verify Content Checksum Synchronizes all packages and then verifies the checksum of all packages locally. If the checksum of an RPM differs from the upstream, it re-downloads the RPM. This option is relevant only for Yum content. Use this option if you have one of the following errors: Specific packages cause a 404 error while synchronizing with yum . Package does not match intended download error, which means that specific packages are corrupted. Procedure In the Satellite web UI, navigate to Content > Products . Select the product containing the corrupted repository. Select the name of a repository you want to synchronize. To perform optimized sync or complete sync, select Advanced Sync from the Select Action menu. Select the required option and click Sync . Optional: To verify the checksum, click Verify Content Checksum from the Select Action menu. CLI procedure Obtain a list of repository IDs: Synchronize a corrupted repository using the necessary option: For the optimized synchronization: For the complete synchronization: For the validate content synchronization: 4.18. Republishing repository metadata You can republish repository metadata when a repository distribution does not have the content that should be distributed based on the contents of the repository. Use this procedure with caution. Red Hat recommends a complete repository sync or publishing a new content view version to repair broken metadata. Procedure In the Satellite web UI, navigate to Content > Products . Select the product that includes the repository for which you want to republish metadata. On the Repositories tab, select a repository. To republish metadata for the repository, click Republish Repository Metadata from the Select Action menu. Note This action is not available for repositories that use the Complete Mirroring policy because the metadata is copied verbatim from the upstream source of the repository. 4.19. Republishing content view metadata Use this procedure to republish content view metadata. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select a content view. On the Versions tab, select a content view version. To republish metadata for the content view version, click Republish repository metadata from the vertical ellipsis icon. Republishing repository metadata will regenerate metadata for all repositories in the content view version that do not adhere to the Complete Mirroring policy. 4.20. Adding an HTTP proxy Use this procedure to add HTTP proxies to Satellite. You can then specify which HTTP proxy to use for Products, repositories, and supported compute resources. Prerequisites Your HTTP proxy must allow access to the following hosts: Host name Port Protocol subscription.rhsm.redhat.com 443 HTTPS cdn.redhat.com 443 HTTPS *.akamaiedge.net 443 HTTPS cert.console.redhat.com (if using Red Hat Insights) 443 HTTPS api.access.redhat.com (if using Red Hat Insights) 443 HTTPS cert-api.access.redhat.com (if using Red Hat Insights) 443 HTTPS If Satellite Server uses a proxy to communicate with subscription.rhsm.redhat.com and cdn.redhat.com then the proxy must not perform SSL inspection on these communications. To use the CLI instead of the Satellite web UI, see the CLI procedure . 
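Before adding the proxy, you can optionally confirm from Satellite Server that the proxy allows the required connections. The following commands are a sketch; the proxy URL is a placeholder and the target hosts are the ones listed in the prerequisites above:
curl --head --proxy http://myproxy.example.com:8080 https://subscription.rhsm.redhat.com
curl --head --proxy http://myproxy.example.com:8080 https://cdn.redhat.com
# Any HTTP status in the response indicates that the proxy permitted the connection;
# a connect error or timeout indicates that the host is being blocked.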
Procedure In the Satellite web UI, navigate to Infrastructure > HTTP Proxies and select New HTTP Proxy . In the Name field, enter a name for the HTTP proxy. In the URL field, enter the URL for the HTTP proxy, including the port number. If your HTTP proxy requires authentication, enter a Username and Password . Optional: In the Test URL field, enter the HTTP proxy URL, then click Test Connection to ensure that you can connect to the HTTP proxy from Satellite. Click the Locations tab and add a location. Click the Organization tab and add an organization. Click Submit . CLI procedure On Satellite Server, enter the following command to add an HTTP proxy: If your HTTP proxy requires authentication, add the --username name and --password password options. For further information, see the Knowledgebase article How to access Red Hat Subscription Manager (RHSM) through a firewall or proxy on the Red Hat Customer Portal. 4.21. Changing the HTTP proxy policy for a product For granular control over network traffic, you can set an HTTP proxy policy for each Product. A Product's HTTP proxy policy applies to all repositories in the Product, unless you set a different policy for individual repositories. To set an HTTP proxy policy for individual repositories, see Section 4.22, "Changing the HTTP proxy policy for a repository" . Procedure In the Satellite web UI, navigate to Content > Products and select the checkbox to each of the Products that you want to change. From the Select Action list, select Manage HTTP Proxy . Select an HTTP Proxy Policy from the list: Global Default : Use the global default proxy setting. No HTTP Proxy : Do not use an HTTP proxy, even if a global default proxy is configured. Use specific HTTP Proxy : Select an HTTP Proxy from the list. You must add HTTP proxies to Satellite before you can select a proxy from this list. For more information, see Section 4.20, "Adding an HTTP proxy" . Click Update . 4.22. Changing the HTTP proxy policy for a repository For granular control over network traffic, you can set an HTTP proxy policy for each repository. To use the CLI instead of the Satellite web UI, see the CLI procedure . To set the same HTTP proxy policy for all repositories in a Product, see Section 4.21, "Changing the HTTP proxy policy for a product" . Procedure In the Satellite web UI, navigate to Content > Products and click the name of the Product that contains the repository. In the Repositories tab, click the name of the repository. Locate the HTTP Proxy field and click the edit icon. Select an HTTP Proxy Policy from the list: Global Default : Use the global default proxy setting. No HTTP Proxy : Do not use an HTTP proxy, even if a global default proxy is configured. Use specific HTTP Proxy : Select an HTTP Proxy from the list. You must add HTTP proxies to Satellite before you can select a proxy from this list. For more information, see Section 4.20, "Adding an HTTP proxy" . Click Save . CLI procedure On Satellite Server, enter the following command, specifying the HTTP proxy policy you want to use: Specify one of the following options for --http-proxy-policy : none : Do not use an HTTP proxy, even if a global default proxy is configured. global_default_http_proxy : Use the global default proxy setting. use_selected_http_proxy : Specify an HTTP proxy using either --http-proxy My_HTTP_Proxy_Name or --http-proxy-id My_HTTP_Proxy_ID . To add a new HTTP proxy to Satellite, see Section 4.20, "Adding an HTTP proxy" . 4.23. 
Creating a sync plan A sync plan checks and updates the content at a scheduled date and time. In Satellite, you can create a sync plan and assign products to the plan. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Sync Plans and click New Sync Plan . In the Name field, enter a name for the plan. Optional: In the Description field, enter a description of the plan. From the Interval list, select the interval at which you want the plan to run. From the Start Date and Start Time lists, select when to start running the synchronization plan. Click Save . CLI procedure To create the synchronization plan, enter the following command: View the available sync plans for an organization to verify that the sync plan has been created: 4.24. Assigning a sync plan to a product A sync plan checks and updates the content at a scheduled date and time. In Satellite, you can assign a sync plan to products to update content regularly. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Products . Select a product. On the Details tab, select a Sync Plan from the drop down menu. CLI procedure Assign a sync plan to a product: 4.25. Assigning a sync plan to multiple products Use this procedure to assign a sync plan to the products in an organization that have been synchronized at least once and contain at least one repository. Procedure Run the following Bash script: ORG=" My_Organization " SYNC_PLAN="daily_sync_at_3_a.m" hammer sync-plan create --name USDSYNC_PLAN --interval daily --sync-date "2023-04-5 03:00:00" --enabled true --organization USDORG for i in USD(hammer --no-headers --csv --csv-separator="|" product list --organization USDORG --per-page 999 | grep -vi not_synced | awk -F'|' 'USD5 != "0" { print USD1}') do hammer product set-sync-plan --sync-plan USDSYNC_PLAN --organization USDORG --id USDi done After executing the script, view the products assigned to the sync plan: 4.26. Best practices for sync plans Add sync plans to products and regularly synchronize content to keep the load on Satellite low during synchronization. Synchronize content rather more often than less often. For example, setup a sync plan to synchronize content every day rather than only once a month. Automate the creation and update of sync plans by using a Hammer script or an Ansible playbook . Distribute synchronization tasks over several hours to reduce the task load by creating multiple sync plans with the Custom Cron tool. Table 4.1. Cron expression examples Cron expression Explanation 0 22 * * 1-5 every day at 22:00 from Monday to Friday 30 3 * * 6,0 at 03:30 every Saturday and Sunday 30 2 8-14 * * at 02:30 every day between the 8th and the 14th days of the month 4.27. Limiting synchronization concurrency By default, each Repository Synchronization job can fetch up to ten files at a time. This can be adjusted on a per repository basis. Increasing the limit may improve performance, but can cause the upstream server to be overloaded or start rejecting requests. If you are seeing Repository syncs fail due to the upstream servers rejecting requests, you may want to try lowering the limit. CLI procedure 4.28. Importing a custom GPG key When clients are consuming signed custom content, ensure that the clients are configured to validate the installation of packages with the appropriate GPG Key. This helps to ensure that only packages from authorized sources can be installed. 
Red Hat content is already configured with the appropriate GPG key and thus GPG Key management of Red Hat Repositories is not supported. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that you have a copy of the GPG key used to sign the RPM content that you want to use and manage in Satellite. Most RPM distribution providers provide their GPG Key on their website. You can also extract this manually from an RPM: Download a copy of the version specific repository package to your local machine: Extract the RPM file without installing it: The GPG key is located relative to the extraction at etc/pki/rpm-gpg/RPM-GPG-KEY- EXAMPLE-95 . Procedure In the Satellite web UI, navigate to Content > Content Credentials and in the upper-right of the window, click Create Content Credential . Enter the name of your repository and select GPG Key from the Type list. Either paste the GPG key into the Content Credential Contents field, or click Browse and select the GPG key file that you want to import. If your custom repository contains content signed by multiple GPG keys, you must enter all required GPG keys in the Content Credential Contents field with new lines between each key, for example: Click Save . CLI procedure Copy the GPG key to your Satellite Server: Upload the GPG key to Satellite: 4.29. Restricting a custom repository to a specific operating system or architecture in Satellite You can configure Satellite to make a custom repository available only on hosts with a specific operating system version or architecture. For example, you can restrict a custom repository only to Red Hat Enterprise Linux 9 hosts. Note Only restrict architecture and operating system version for custom products. Satellite applies these restrictions automatically for Red Hat repositories. Procedure In the Satellite web UI, navigate to Content > Products . Click the product that contains the repository sets you want to restrict. In the Repositories tab, click the repository you want to restrict. In the Publishing Settings section, set the following options: Set Restrict to OS version to restrict the operating system version. Set Restrict to architecture to restrict the architecture.
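The CLI equivalents below are a minimal sketch of the same workflow: importing a GPG key as a content credential and creating a yum repository that uses it while restricting the operating system version and architecture. The key file, repository, product, organization, OS version, and upstream URL values are placeholders taken from the command reference for this chapter; adjust them for your environment.
# Import the GPG key as a content credential
hammer content-credentials create --content-type gpg_key --name "My_GPG_Key" --organization "My_Organization" --path ~/RPM-GPG-KEY-EXAMPLE-95
# Create a yum repository that uses the key and is restricted to one OS version and architecture
hammer repository create --name "My_Repository" --product "My_Product" --organization "My_Organization" --content-type "yum" --url My_Upstream_URL --gpg-key-id My_GPG_Key_ID --os-version "My_OS_Version" --arch "My_Architecture"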
[ "scp My_SSL_Certificate [email protected]:~/.", "wget -P ~ http:// upstream-satellite.example.com /pub/katello-server-ca.crt", "hammer content-credential create --content-type cert --name \" My_SSL_Certificate \" --organization \" My_Organization \" --path ~/ My_SSL_Certificate", "hammer product create --name \" My_Product \" --sync-plan \" Example Plan \" --description \" Content from My Repositories \" --organization \" My_Organization \"", "hammer repository create --arch \" My_Architecture \" --content-type \"yum\" --gpg-key-id My_GPG_Key_ID --name \" My_Repository \" --organization \" My_Organization \" --os-version \" My_OS_Version \" --product \" My_Product \" --publish-via-http true --url My_Upstream_URL", "hammer product list --organization \" My_Organization \"", "hammer repository-set list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"", "hammer repository-set enable --name \"Red Hat Enterprise Linux 7 Server (RPMs)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"", "hammer product synchronize --name \" My_Product \" --organization \" My_Organization \"", "hammer repository synchronize --name \" My_Repository \" --organization \" My_Organization \" --product \" My Product \"", "ORG=\" My_Organization \" for i in USD(hammer --no-headers --csv repository list --organization USDORG --fields Id) do hammer repository synchronize --id USD{i} --organization USDORG --async done", "hammer settings set --name default_redhat_download_policy --value immediate", "hammer settings set --name default_download_policy --value immediate", "hammer repository list --organization-label My_Organization_Label", "hammer repository update --download-policy immediate --name \" My_Repository \" --organization-label My_Organization_Label --product \" My_Product \"", "hammer repository list --organization-label My_Organization_Label", "hammer repository update --id 1 --mirroring-policy mirror_complete", "hammer repository upload-content --id My_Repository_ID --path /path/to/example-package.rpm", "hammer repository upload-content --content-type srpm --id My_Repository_ID --path /path/to/example-package.src.rpm", "semanage port -l | grep ^http_port_t http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000", "semanage port -a -t http_port_t -p tcp 10011", "hammer repository list --organization \" My_Organization \"", "hammer repository synchronize --id My_ID", "hammer repository synchronize --id My_ID --skip-metadata-check true", "hammer repository synchronize --id My_ID --validate-contents true", "hammer http-proxy create --name proxy-name --url proxy-URL:port-number", "hammer repository update --http-proxy-policy HTTP_Proxy_Policy --id Repository_ID", "hammer sync-plan create --description \" My_Description \" --enabled true --interval daily --name \" My_Products \" --organization \" My_Organization \" --sync-date \"2023-01-01 01:00:00\"", "hammer sync-plan list --organization \" My_Organization \"", "hammer product set-sync-plan --name \" My_Product_Name \" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan_Name \"", "ORG=\" My_Organization \" SYNC_PLAN=\"daily_sync_at_3_a.m\" hammer sync-plan create --name USDSYNC_PLAN --interval daily --sync-date \"2023-04-5 03:00:00\" --enabled true --organization USDORG for i in USD(hammer --no-headers --csv --csv-separator=\"|\" product list --organization USDORG --per-page 999 | grep -vi not_synced | awk -F'|' 'USD5 != \"0\" { print 
USD1}') do hammer product set-sync-plan --sync-plan USDSYNC_PLAN --organization USDORG --id USDi done", "hammer product list --organization USDORG --sync-plan USDSYNC_PLAN", "hammer repository update --download-concurrency 5 --id Repository_ID --organization \" My_Organization \"", "wget http://www.example.com/9.5/example-9.5-2.noarch.rpm", "rpm2cpio example-9.5-2.noarch.rpm | cpio -idmv", "-----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFy/HE4BEADttv2TCPzVrre+aJ9f5QsR6oWZMm7N5Lwxjm5x5zA9BLiPPGFN 4aTUR/g+K1S0aqCU+ZS3Rnxb+6fnBxD+COH9kMqXHi3M5UNzbp5WhCdUpISXjjpU XIFFWBPuBfyr/FKRknFH15P+9kLZLxCpVZZLsweLWCuw+JKCMmnA =F6VG -----END PGP PUBLIC KEY BLOCK----- -----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFw467UBEACmREzDeK/kuScCmfJfHJa0Wgh/2fbJLLt3KSvsgDhORIptf+PP OTFDlKuLkJx99ZYG5xMnBG47C7ByoMec1j94YeXczuBbynOyyPlvduma/zf8oB9e Wl5GnzcLGAnUSRamfqGUWcyMMinHHIKIc1X1P4I= =WPpI -----END PGP PUBLIC KEY BLOCK-----", "scp ~/etc/pki/rpm-gpg/RPM-GPG-KEY- EXAMPLE-95 [email protected]:~/.", "hammer content-credentials create --content-type gpg_key --name \" My_GPG_Key \" --organization \" My_Organization \" --path ~/RPM-GPG-KEY- EXAMPLE-95" ]
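If you prefer to schedule synchronization with one of the cron expressions from Table 4.1 rather than a fixed daily interval, the following sketch shows one possible approach. The plan name, cron expression, and product are examples, and the custom cron interval and the --cron-expression option should be verified against the hammer version installed on your Satellite Server.
# Create a sync plan driven by a cron expression (weekdays at 22:00) and assign it to a product
hammer sync-plan create --name "Weeknight sync" --interval "custom cron" --cron-expression "0 22 * * 1-5" --sync-date "2023-04-05 22:00:00" --enabled true --organization "My_Organization"
hammer product set-sync-plan --name "My_Product" --organization "My_Organization" --sync-plan "Weeknight sync"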
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/Importing_Content_content-management
Chapter 9. Managing Red Hat High Availability Add-On With Command Line Tools
Chapter 9. Managing Red Hat High Availability Add-On With Command Line Tools This chapter describes various administrative tasks for managing Red Hat High Availability Add-On and consists of the following sections: Section 9.1, "Starting and Stopping the Cluster Software" Section 9.2, "Deleting or Adding a Node" Section 9.3, "Managing High-Availability Services" Section 9.4, "Updating a Configuration" Important Make sure that your deployment of Red Hat High Availability Add-On meets your needs and can be supported. Consult with an authorized Red Hat representative to verify your configuration prior to deployment. In addition, allow time for a configuration burn-in period to test failure modes. Important This chapter references commonly used cluster.conf elements and attributes. For a comprehensive list and description of cluster.conf elements and attributes, see the cluster schema at /usr/share/cluster/cluster.rng , and the annotated schema at /usr/share/doc/cman-X.Y.ZZ/cluster_conf.html (for example /usr/share/doc/cman-3.0.12/cluster_conf.html ). Important Certain procedures in this chapter call for using the cman_tool version -r command to propagate a cluster configuration throughout a cluster. Using that command requires that ricci is running. Note Procedures in this chapter may include specific commands for some of the command-line tools listed in Appendix E, Command Line Tools Summary . For more information about all commands and variables, see the man page for each command-line tool. 9.1. Starting and Stopping the Cluster Software You can start or stop cluster software on a node according to Section 9.1.1, "Starting Cluster Software" and Section 9.1.2, "Stopping Cluster Software" . Starting cluster software on a node causes it to join the cluster; stopping the cluster software on a node causes it to leave the cluster. 9.1.1. Starting Cluster Software To start the cluster software on a node, type the following commands in this order: service cman start service clvmd start , if CLVM has been used to create clustered volumes service gfs2 start , if you are using Red Hat GFS2 service rgmanager start , if you are using high-availability (HA) services ( rgmanager ). For example: 9.1.2. Stopping Cluster Software To stop the cluster software on a node, type the following commands in this order: service rgmanager stop , if you are using high-availability (HA) services ( rgmanager ). service gfs2 stop , if you are using Red Hat GFS2 umount -at gfs2 , if you are using Red Hat GFS2 in conjunction with rgmanager , to ensure that any GFS2 files mounted during rgmanager startup (but not unmounted during shutdown) were also unmounted. service clvmd stop , if CLVM has been used to create clustered volumes service cman stop For example: Note Stopping cluster software on a node causes its HA services to fail over to another node. As an alternative to that, consider relocating or migrating HA services to another node before stopping cluster software. For information about managing HA services, see Section 9.3, "Managing High-Availability Services" .
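The ordered start and stop sequences above can be wrapped in a small helper script. This is only a sketch for a node that uses CLVM, GFS2, and rgmanager; the script name is hypothetical, and you should drop the lines for services you do not run.
#!/usr/bin/env bash
# Usage: cluster-services.sh start|stop   (hypothetical helper name)
case "$1" in
  start)
    service cman start
    service clvmd start      # only if CLVM is used to create clustered volumes
    service gfs2 start       # only if Red Hat GFS2 is used
    service rgmanager start  # only if HA services (rgmanager) are used
    ;;
  stop)
    service rgmanager stop
    service gfs2 stop
    umount -at gfs2          # unmount GFS2 file systems mounted during rgmanager startup
    service clvmd stop
    service cman stop
    ;;
  *)
    echo "Usage: $0 start|stop" >&2
    exit 1
    ;;
esac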
[ "service cman start Starting cluster: Checking Network Manager... [ OK ] Global setup... [ OK ] Loading kernel modules... [ OK ] Mounting configfs... [ OK ] Starting cman... [ OK ] Waiting for quorum... [ OK ] Starting fenced... [ OK ] Starting dlm_controld... [ OK ] Starting gfs_controld... [ OK ] Unfencing self... [ OK ] Joining fence domain... [ OK ] service clvmd start Starting clvmd: [ OK ] Activating VG(s): 2 logical volume(s) in volume group \"vg_example\" now active [ OK ] service gfs2 start Mounting GFS2 filesystem (/mnt/gfsA): [ OK ] Mounting GFS2 filesystem (/mnt/gfsB): [ OK ] service rgmanager start Starting Cluster Service Manager: [ OK ]", "service rgmanager stop Stopping Cluster Service Manager: [ OK ] service gfs2 stop Unmounting GFS2 filesystem (/mnt/gfsA): [ OK ] Unmounting GFS2 filesystem (/mnt/gfsB): [ OK ] umount -at gfs2 service clvmd stop Signaling clvmd to exit [ OK ] clvmd terminated [ OK ] service cman stop Stopping cluster: Leaving fence domain... [ OK ] Stopping gfs_controld... [ OK ] Stopping dlm_controld... [ OK ] Stopping fenced... [ OK ] Stopping cman... [ OK ] Waiting for corosync to shutdown: [ OK ] Unloading kernel modules... [ OK ] Unmounting configfs... [ OK ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-mgmt-cli-ca
Appendix C. Journaler configuration reference
Appendix C. Journaler configuration reference Reference list of the configuration options that can be used for the journaler. journaler_write_head_interval Description How frequently to update the journal head object. Type Integer Required No Default 15 journaler_prefetch_periods Description How many stripe periods to read ahead on journal replay. Type Integer Required No Default 10 journaler_prezero_periods Description How many stripe periods to zero ahead of write position. Type Integer Required No Default 10 journaler_batch_interval Description Maximum additional latency in seconds to incur artificially. Type Double Required No Default .001 journaler_batch_max Description Maximum bytes that will be delayed flushing. Type 64-bit Unsigned Integer Required No Default 0
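These options rarely need to be changed, but if you do tune them, one way is through the centralized configuration database, as in the sketch below; the values shown are illustrative only, and you can also set the options in the [mds] section of the Ceph configuration file.
# Raise journal read-ahead during replay, then check the resulting value (example value)
ceph config set mds journaler_prefetch_periods 20
ceph config get mds journaler_prefetch_periods
# Lower the artificial batching latency for journal writes (example value)
ceph config set mds journaler_batch_interval 0.0005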
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/file_system_guide/journaler-configuration-reference_fs
Chapter 7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure
Chapter 7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure You can remove a cluster that you deployed in your VMware vSphere instance by using installer-provisioned infrastructure. Note When you run the openshift-install destroy cluster command to uninstall OpenShift Container Platform, vSphere volumes are not automatically deleted. The cluster administrator must manually find the vSphere volumes and delete them. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
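Because the installation program needs the metadata.json file from the original installation directory, a small wrapper such as the following sketch can guard against running the destroy command from the wrong place; the directory name is an example only.
#!/usr/bin/env bash
INSTALL_DIR=./my-vsphere-cluster   # hypothetical installation directory
if [ ! -f "${INSTALL_DIR}/metadata.json" ]; then
  echo "metadata.json not found in ${INSTALL_DIR}; refusing to destroy" >&2
  exit 1
fi
./openshift-install destroy cluster --dir "${INSTALL_DIR}" --log-level info
Remember that vSphere volumes left behind by the cluster still have to be located and deleted manually, as noted above.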
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_vmware_vsphere/uninstalling-cluster-vsphere-installer-provisioned
Chapter 215. Lucene Component
Chapter 215. Lucene Component Available as of Camel version 2.2 The lucene component is based on the Apache Lucene project. Apache Lucene is a powerful high-performance, full-featured text search engine library written entirely in Java. For more details about Lucene, please see the following links http://lucene.apache.org/java/docs/ http://lucene.apache.org/java/docs/features.html The lucene component in camel facilitates integration and utilization of Lucene endpoints in enterprise integration patterns and scenarios. The lucene component does the following builds a searchable index of documents when payloads are sent to the Lucene Endpoint facilitates performing of indexed searches in Camel This component only supports producer endpoints. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-lucene</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 215.1. URI format lucene:searcherName:insert[?options] lucene:searcherName:query[?options] You can append query options to the URI in the following format, ?option=value&option=value&... 215.2. Insert Options The Lucene component supports 2 options, which are listed below. Name Description Default Type config (advanced) To use a shared lucene configuration LuceneConfiguration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Lucene endpoint is configured using URI syntax: with the following path and query parameters: 215.2.1. Path Parameters (2 parameters): Name Description Default Type host Required The URL to the lucene server String operation Required Operation to do such as insert or query. LuceneOperation 215.2.2. Query Parameters (5 parameters): Name Description Default Type analyzer (producer) An Analyzer builds TokenStreams, which analyze text. It thus represents a policy for extracting index terms from text. The value for analyzer can be any class that extends the abstract class org.apache.lucene.analysis.Analyzer. Lucene also offers a rich set of analyzers out of the box Analyzer indexDir (producer) A file system directory in which index files are created upon analysis of the document by the specified analyzer File maxHits (producer) An integer value that limits the result set of the search operation int srcDir (producer) An optional directory containing files to be used to be analyzed and added to the index at producer startup. File synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 215.3. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.lucene.config.analyzer An Analyzer builds TokenStreams, which analyze text. It thus represents a policy for extracting index terms from text. The value for analyzer can be any class that extends the abstract class org.apache.lucene.analysis.Analyzer. 
Lucene also offers a rich set of analyzers out of the box Analyzer camel.component.lucene.config.authority String camel.component.lucene.config.host The URL to the lucene server String camel.component.lucene.config.index-directory A file system directory in which index files are created upon analysis of the document by the specified analyzer File camel.component.lucene.config.lucene-version Version camel.component.lucene.config.max-hits An integer value that limits the result set of the search operation Integer camel.component.lucene.config.operation Operation to do such as insert or query. LuceneOperation camel.component.lucene.config.source-directory An optional directory containing files to be used to be analyzed and added to the index at producer startup. File camel.component.lucene.config.uri URI camel.component.lucene.enabled Enable lucene component true Boolean camel.component.lucene.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 215.4. Sending/Receiving Messages to/from the cache 215.4.1. Message Headers Header Description QUERY The Lucene Query to performed on the index. The query may include wildcards and phrases RETURN_LUCENE_DOCS Camel 2.15: Set this header to true to include the actual Lucene documentation when returning hit information. 215.4.2. Lucene Producers This component supports 2 producer endpoints. insert - The insert producer builds a searchable index by analyzing the body in incoming exchanges and associating it with a token ("content"). query - The query producer performs searches on a pre-created index. The query uses the searchable index to perform score & relevance based searches. Queries are sent via the incoming exchange contains a header property name called 'QUERY'. The value of the header property 'QUERY' is a Lucene Query. For more details on how to create Lucene Queries check out http://lucene.apache.org/java/3_0_0/queryparsersyntax.html 215.4.3. Lucene Processor There is a processor called LuceneQueryProcessor available to perform queries against lucene without the need to create a producer. 215.5. Lucene Usage Samples 215.5.1. Example 1: Creating a Lucene index RouteBuilder builder = new RouteBuilder() { public void configure() { from("direct:start"). to("lucene:whitespaceQuotesIndex:insert? analyzer=#whitespaceAnalyzer&indexDir=#whitespace&srcDir=#load_dir"). to("mock:result"); } }; 215.5.2. Example 2: Loading properties into the JNDI registry in the Camel Context @Override protected JndiRegistry createRegistry() throws Exception { JndiRegistry registry = new JndiRegistry(createJndiContext()); registry.bind("whitespace", new File("./whitespaceIndexDir")); registry.bind("load_dir", new File("src/test/resources/sources")); registry.bind("whitespaceAnalyzer", new WhitespaceAnalyzer()); return registry; } ... CamelContext context = new DefaultCamelContext(createRegistry()); 215.5.3. Example 2: Performing searches using a Query Producer RouteBuilder builder = new RouteBuilder() { public void configure() { from("direct:start"). setHeader("QUERY", constant("Seinfeld")). to("lucene:searchIndex:query? analyzer=#whitespaceAnalyzer&indexDir=#whitespace&maxHits=20"). 
to("direct:next"); from("direct:next").process(new Processor() { public void process(Exchange exchange) throws Exception { Hits hits = exchange.getIn().getBody(Hits.class); printResults(hits); } private void printResults(Hits hits) { LOG.debug("Number of hits: " + hits.getNumberOfHits()); for (int i = 0; i < hits.getNumberOfHits(); i++) { LOG.debug("Hit " + i + " Index Location:" + hits.getHit().get(i).getHitLocation()); LOG.debug("Hit " + i + " Score:" + hits.getHit().get(i).getScore()); LOG.debug("Hit " + i + " Data:" + hits.getHit().get(i).getData()); } } }).to("mock:searchResult"); } }; 215.5.4. Example 3: Performing searches using a Query Processor RouteBuilder builder = new RouteBuilder() { public void configure() { try { from("direct:start"). setHeader("QUERY", constant("Rodney Dangerfield")). process(new LuceneQueryProcessor("target/stdindexDir", analyzer, null, 20)). to("direct:next"); } catch (Exception e) { e.printStackTrace(); } from("direct:next").process(new Processor() { public void process(Exchange exchange) throws Exception { Hits hits = exchange.getIn().getBody(Hits.class); printResults(hits); } private void printResults(Hits hits) { LOG.debug("Number of hits: " + hits.getNumberOfHits()); for (int i = 0; i < hits.getNumberOfHits(); i++) { LOG.debug("Hit " + i + " Index Location:" + hits.getHit().get(i).getHitLocation()); LOG.debug("Hit " + i + " Score:" + hits.getHit().get(i).getScore()); LOG.debug("Hit " + i + " Data:" + hits.getHit().get(i).getData()); } } }).to("mock:searchResult"); } };
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-lucene</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "lucene:searcherName:insert[?options] lucene:searcherName:query[?options]", "lucene:host:operation", "RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"direct:start\"). to(\"lucene:whitespaceQuotesIndex:insert? analyzer=#whitespaceAnalyzer&indexDir=#whitespace&srcDir=#load_dir\"). to(\"mock:result\"); } };", "@Override protected JndiRegistry createRegistry() throws Exception { JndiRegistry registry = new JndiRegistry(createJndiContext()); registry.bind(\"whitespace\", new File(\"./whitespaceIndexDir\")); registry.bind(\"load_dir\", new File(\"src/test/resources/sources\")); registry.bind(\"whitespaceAnalyzer\", new WhitespaceAnalyzer()); return registry; } CamelContext context = new DefaultCamelContext(createRegistry());", "RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"direct:start\"). setHeader(\"QUERY\", constant(\"Seinfeld\")). to(\"lucene:searchIndex:query? analyzer=#whitespaceAnalyzer&indexDir=#whitespace&maxHits=20\"). to(\"direct:next\"); from(\"direct:next\").process(new Processor() { public void process(Exchange exchange) throws Exception { Hits hits = exchange.getIn().getBody(Hits.class); printResults(hits); } private void printResults(Hits hits) { LOG.debug(\"Number of hits: \" + hits.getNumberOfHits()); for (int i = 0; i < hits.getNumberOfHits(); i++) { LOG.debug(\"Hit \" + i + \" Index Location:\" + hits.getHit().get(i).getHitLocation()); LOG.debug(\"Hit \" + i + \" Score:\" + hits.getHit().get(i).getScore()); LOG.debug(\"Hit \" + i + \" Data:\" + hits.getHit().get(i).getData()); } } }).to(\"mock:searchResult\"); } };", "RouteBuilder builder = new RouteBuilder() { public void configure() { try { from(\"direct:start\"). setHeader(\"QUERY\", constant(\"Rodney Dangerfield\")). process(new LuceneQueryProcessor(\"target/stdindexDir\", analyzer, null, 20)). to(\"direct:next\"); } catch (Exception e) { e.printStackTrace(); } from(\"direct:next\").process(new Processor() { public void process(Exchange exchange) throws Exception { Hits hits = exchange.getIn().getBody(Hits.class); printResults(hits); } private void printResults(Hits hits) { LOG.debug(\"Number of hits: \" + hits.getNumberOfHits()); for (int i = 0; i < hits.getNumberOfHits(); i++) { LOG.debug(\"Hit \" + i + \" Index Location:\" + hits.getHit().get(i).getHitLocation()); LOG.debug(\"Hit \" + i + \" Score:\" + hits.getHit().get(i).getScore()); LOG.debug(\"Hit \" + i + \" Data:\" + hits.getHit().get(i).getData()); } } }).to(\"mock:searchResult\"); } };" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/lucene-component
Chapter 1. Role APIs
Chapter 1. Role APIs 1.1. ClusterRoleBinding [authorization.openshift.io/v1] Description ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference any ClusterRole in the same namespace or in the global namespace. It adds who information via (Users and Groups) OR Subjects and namespace information by which namespace it exists in. ClusterRoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ClusterRole [authorization.openshift.io/v1] Description ClusterRole is a logical grouping of PolicyRules that can be referenced as a unit by ClusterRoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. RoleBindingRestriction [authorization.openshift.io/v1] Description RoleBindingRestriction is an object that can be matched against a subject (user, group, or service account) to determine whether rolebindings on that subject are allowed in the namespace to which the RoleBindingRestriction belongs. If any one of those RoleBindingRestriction objects matches a subject, rolebindings on that subject in the namespace are allowed. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. RoleBinding [authorization.openshift.io/v1] Description RoleBinding references a Role, but does not contain it. It can reference any Role in the same namespace or in the global namespace. It adds who information via (Users and Groups) OR Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. Role [authorization.openshift.io/v1] Description Role is a logical grouping of PolicyRules that can be referenced as a unit by RoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object
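A quick way to look at these objects in a running cluster is with the oc client. The commands below are only a sketch: the role name, user, and namespace are placeholders, and oc creates the equivalent RBAC objects, which OpenShift also presents through the authorization.openshift.io/v1 API described here.
oc get clusterroles                          # list ClusterRole objects
oc describe clusterrole edit                 # inspect the rules in one ClusterRole
oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods -n my-project
oc create rolebinding pod-reader-binding --role=pod-reader --user=alice -n my-project
oc get rolebindings -n my-project            # confirm the new RoleBinding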
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/role_apis/role-apis
15.5.8. Network Options
15.5.8. Network Options The following lists directives which affect how vsftpd interacts with the network. accept_timeout - Specifies the amount of time for a client using passive mode to establish a connection. The default value is 60 . anon_max_rate - Specifies the maximum data transfer rate for anonymous users in bytes per second. The default value is 0 , which does not limit the transfer rate. connect_from_port_20 - When enabled, vsftpd runs with enough privileges to open port 20 on the server during active mode data transfers. Disabling this option allows vsftpd to run with less privileges, but may be incompatible with some FTP clients. The default value is NO . Note, in Red Hat Enterprise Linux, the value is set to YES . connect_timeout - Specifies the maximum amount of time a client using active mode has to respond to a data connection, in seconds. The default value is 60 . data_connection_timeout - Specifies the maximum amount of time data transfers are allowed to stall, in seconds. Once triggered, the connection to the remote client is closed. The default value is 300 . ftp_data_port - Specifies the port used for active data connections when connect_from_port_20 is set to YES . The default value is 20 . idle_session_timeout - Specifies the maximum amount of time between commands from a remote client. Once triggered, the connection to the remote client is closed. The default value is 300 . listen_address - Specifies the IP address on which vsftpd listens for network connections. There is no default value for this directive. Note If running multiple copies of vsftpd serving different IP addresses, the configuration file for each copy of the vsftpd daemon must have a different value for this directive. Refer to Section 15.4.1, "Starting Multiple Copies of vsftpd " for more information about multihomed FTP servers. listen_address6 - Specifies the IPv6 address on which vsftpd listens for network connections when listen_ipv6 is set to YES . There is no default value for this directive. Note If running multiple copies of vsftpd serving different IP addresses, the configuration file for each copy of the vsftpd daemon must have a different value for this directive. Refer to Section 15.4.1, "Starting Multiple Copies of vsftpd " for more information about multihomed FTP servers. listen_port - Specifies the port on which vsftpd listens for network connections. The default value is 21 . local_max_rate - Specifies the maximum rate data is transferred for local users logged into the server in bytes per second. The default value is 0 , which does not limit the transfer rate. max_clients - Specifies the maximum number of simultaneous clients allowed to connect to the server when it is running in standalone mode. Any additional client connections would result in an error message. The default value is 0 , which does not limit connections. max_per_ip - Specifies the maximum number of clients allowed to connect from the same source IP address. The default value is 0 , which does not limit connections. pasv_address - Specifies the IP address for the public facing IP address of the server for servers behind Network Address Translation (NAT) firewalls. This enables vsftpd to hand out the correct return address for passive mode connections. There is no default value for this directive. pasv_enable - When enabled, passive mode connections are allowed. The default value is YES . pasv_max_port - Specifies the highest possible port sent to the FTP clients for passive mode connections.
This setting is used to limit the port range so that firewall rules are easier to create. The default value is 0 , which does not limit the highest passive port range. The value must not exceed 65535 . pasv_min_port - Specifies the lowest possible port sent to the FTP clients for passive mode connections. This setting is used to limit the port range so that firewall rules are easier to create. The default value is 0 , which does not limit the lowest passive port range. The value must not be lower than 1024 . pasv_promiscuous - When enabled, data connections are not checked to make sure they are originating from the same IP address. This setting is only useful for certain types of tunneling. Warning Do not enable this option unless absolutely necessary as it disables an important security feature which verifies that passive mode connections originate from the same IP address as the control connection that initiates the data transfer. The default value is NO . port_enable - When enabled, active mode connections are allowed. The default value is YES .
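For example, a passive-mode configuration behind a NAT firewall that limits the data ports and client connections might look like the following sketch; the address, port range, and limits are placeholders, and the exact values depend on your firewall rules.
# Append example passive-mode settings to vsftpd.conf, then restart the service
cat >> /etc/vsftpd/vsftpd.conf <<'EOF'
pasv_enable=YES
pasv_min_port=10090
pasv_max_port=10100
pasv_address=203.0.113.10
max_clients=50
max_per_ip=5
EOF
service vsftpd restart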
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-ftp-vsftpd-conf-opt-net
Chapter 1. Overview of nodes
Chapter 1. Overview of nodes 1.1. About nodes A node is a virtual or bare-metal machine in a Kubernetes cluster. Worker nodes host your application containers, grouped as pods. The control plane nodes run services that are required to control the Kubernetes cluster. In OpenShift Container Platform, the control plane nodes contain more than just the Kubernetes services for managing the OpenShift Container Platform cluster. Having stable and healthy nodes in a cluster is fundamental to the smooth functioning of your hosted application. In OpenShift Container Platform, you can access, manage, and monitor a node through the Node object representing the node. Using the OpenShift CLI ( oc ) or the web console, you can perform the following operations on a node. The following components of a node are responsible for maintaining the running of pods and providing the Kubernetes runtime environment. Container runtime The container runtime is responsible for running containers. Kubernetes offers several runtimes such as containerd, cri-o, rktlet, and Docker. Kubelet Kubelet runs on nodes and reads the container manifests. It ensures that the defined containers have started and are running. The kubelet process maintains the state of work and the node server. Kubelet manages network rules and port forwarding. The kubelet manages containers that are created by Kubernetes only. Kube-proxy Kube-proxy runs on every node in the cluster and maintains the network traffic between the Kubernetes resources. A Kube-proxy ensures that the networking environment is isolated and accessible. DNS Cluster DNS is a DNS server which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. Read operations The read operations allow an administrator or a developer to get information about nodes in an OpenShift Container Platform cluster. List all the nodes in a cluster . Get information about a node, such as memory and CPU usage, health, status, and age. List pods running on a node . Management operations As an administrator, you can easily manage a node in an OpenShift Container Platform cluster through several tasks: Add or update node labels . A label is a key-value pair applied to a Node object. You can control the scheduling of pods using labels. Change node configuration using a custom resource definition (CRD), or the kubeletConfig object. Configure nodes to allow or disallow the scheduling of pods. Healthy worker nodes with a Ready status allow pod placement by default while the control plane nodes do not; you can change this default behavior by configuring the worker nodes to be unschedulable and the control plane nodes to be schedulable . Allocate resources for nodes using the system-reserved setting. You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes, or you can manually determine and set the best resources for your nodes. Configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit, or both. Reboot a node gracefully using pod anti-affinity . Delete a node from a cluster by scaling down the cluster using a compute machine set. To delete a node from a bare-metal cluster, you must first drain all pods on the node and then manually delete the node. 
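The read and management operations above map to a handful of oc commands. The following sketch uses a placeholder node name; adjust the label key and drain options for your environment.
oc get nodes                                   # list all nodes and their status
oc describe node <node_name>                   # memory, CPU, conditions, and running pods
oc label node <node_name> environment=dev      # add or update a label (example key and value)
oc adm cordon <node_name>                      # mark the node unschedulable
oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data
oc adm uncordon <node_name>                    # make the node schedulable again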
Enhancement operations OpenShift Container Platform allows you to do more than just access and manage nodes; as an administrator, you can perform the following tasks on nodes to make the cluster more efficient, application-friendly, and to provide a better environment for your developers. Manage node-level tuning for high-performance applications that require some level of kernel tuning by using the Node Tuning Operator . Enable TLS security profiles on the node to protect communication between the kubelet and the Kubernetes API server. Run background tasks on nodes automatically with daemon sets . You can create and use daemon sets to create shared storage, run a logging pod on every node, or deploy a monitoring agent on all nodes. Free node resources using garbage collection . You can ensure that your nodes are running efficiently by removing terminated containers and the images not referenced by any running pods. Add kernel arguments to a set of nodes . Configure an OpenShift Container Platform cluster to have worker nodes at the network edge (remote worker nodes). For information on the challenges of having remote worker nodes in an OpenShift Container Platform cluster and some recommended approaches for managing pods on a remote worker node, see Using remote worker nodes at the network edge . 1.2. About pods A pod is one or more containers deployed together on a node. As a cluster administrator, you can define a pod, assign it to run on a healthy node that is ready for scheduling, and manage. A pod runs as long as the containers are running. You cannot change a pod once it is defined and is running. Some operations you can perform when working with pods are: Read operations As an administrator, you can get information about pods in a project through the following tasks: List pods associated with a project , including information such as the number of replicas and restarts, current status, and age. View pod usage statistics such as CPU, memory, and storage consumption. Management operations The following list of tasks provides an overview of how an administrator can manage pods in an OpenShift Container Platform cluster. Control scheduling of pods using the advanced scheduling features available in OpenShift Container Platform: Node-to-pod binding rules such as pod affinity , node affinity , and anti-affinity . Node labels and selectors . Taints and tolerations . Pod topology spread constraints . Secondary scheduling . Configure the descheduler to evict pods based on specific strategies so that the scheduler reschedules the pods to more appropriate nodes. Configure how pods behave after a restart using pod controllers and restart policies . Limit both egress and ingress traffic on a pod . Add and remove volumes to and from any object that has a pod template . A volume is a mounted file system available to all the containers in a pod. Container storage is ephemeral; you can use volumes to persist container data. Enhancement operations You can work with pods more easily and efficiently with the help of various tools and features available in OpenShift Container Platform. The following operations involve using those tools and features to better manage pods. Operation User More information Create and use a horizontal pod autoscaler. Developer You can use a horizontal pod autoscaler to specify the minimum and the maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target. 
Using a horizontal pod autoscaler, you can automatically scale pods . Install and use a vertical pod autoscaler . Administrator and developer As an administrator, use a vertical pod autoscaler to better use cluster resources by monitoring the resources and the resource requirements of workloads. As a developer, use a vertical pod autoscaler to ensure your pods stay up during periods of high demand by scheduling pods to nodes that have enough resources for each pod. Provide access to external resources using device plugins. Administrator A device plugin is a gRPC service running on nodes (external to the kubelet), which manages specific hardware resources. You can deploy a device plugin to provide a consistent and portable solution to consume hardware devices across clusters. Provide sensitive data to pods using the Secret object . Administrator Some applications need sensitive information, such as passwords and usernames. You can use the Secret object to provide such information to an application pod. 1.3. About containers A container is the basic unit of an OpenShift Container Platform application, which comprises the application code packaged along with its dependencies, libraries, and binaries. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. Linux container technologies are lightweight mechanisms for isolating running processes and limiting access to only designated resources. As an administrator, You can perform various tasks on a Linux container, such as: Copy files to and from a container . Allow containers to consume API objects . Execute remote commands in a container . Use port forwarding to access applications in a container . OpenShift Container Platform provides specialized containers called Init containers . Init containers run before application containers and can contain utilities or setup scripts not present in an application image. You can use an Init container to perform tasks before the rest of a pod is deployed. Apart from performing specific tasks on nodes, pods, and containers, you can work with the overall OpenShift Container Platform cluster to keep the cluster efficient and the application pods highly available. 1.4. About autoscaling pods on a node OpenShift Container Platform offers three tools that you can use to automatically scale the number of pods on your nodes and the resources allocated to pods. Horizontal Pod Autoscaler The Horizontal Pod Autoscaler (HPA) can automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration. For more information, see Automatically scaling pods with the horizontal pod autoscaler . Custom Metrics Autoscaler The Custom Metrics Autoscaler can automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not based only on CPU or memory. For more information, see Custom Metrics Autoscaler Operator overview . Vertical Pod Autoscaler The Vertical Pod Autoscaler (VPA) can automatically review the historic and current CPU and memory resources for containers in pods and can update the resource limits and requests based on the usage values it learns. For more information, see Automatically adjust pod resource levels with the vertical pod autoscaler . 1.5. 
Glossary of common terms for OpenShift Container Platform nodes This glossary defines common terms that are used in the node content. Container It is a lightweight and executable image that comprises software and all its dependencies. Containers virtualize the operating system, as a result, you can run containers anywhere from a data center to a public or private cloud to even a developer's laptop. Daemon set Ensures that a replica of the pod runs on eligible nodes in an OpenShift Container Platform cluster. egress The process of data sharing externally through a network's outbound traffic from a pod. garbage collection The process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods. Horizontal Pod Autoscaler(HPA) Implemented as a Kubernetes API resource and a controller. You can use the HPA to specify the minimum and maximum number of pods that you want to run. You can also specify the CPU or memory utilization that your pods should target. The HPA scales out and scales in pods when a given CPU or memory threshold is crossed. Ingress Incoming traffic to a pod. Job A process that runs to completion. A job creates one or more pod objects and ensures that the specified pods are successfully completed. Labels You can use labels, which are key-value pairs, to organise and select subsets of objects, such as a pod. Node A worker machine in the OpenShift Container Platform cluster. A node can be either be a virtual machine (VM) or a physical machine. Node Tuning Operator You can use the Node Tuning Operator to manage node-level tuning by using the TuneD daemon. It ensures custom tuning specifications are passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Self Node Remediation Operator The Operator runs on the cluster nodes and identifies and reboots nodes that are unhealthy. Pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. Toleration Indicates that the pod is allowed (but not required) to be scheduled on nodes or node groups with matching taints. You can use tolerations to enable the scheduler to schedule pods with matching taints. Taint A core object that comprises a key,value, and effect. Taints and tolerations work together to ensure that pods are not scheduled on irrelevant nodes.
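As a small illustration of the taint and toleration terms above, the following commands add and later remove an example taint; while the taint is present, only pods that declare a matching toleration are scheduled on the node. The key, value, and node name are placeholders.
oc adm taint nodes <node_name> dedicated=infra:NoSchedule   # add an example taint
oc adm taint nodes <node_name> dedicated-                   # remove the taint again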
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/nodes/overview-of-nodes
Logging configuration
Logging configuration Red Hat build of Quarkus 3.8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/logging_configuration/index
11.8. Shrinking Volumes
11.8. Shrinking Volumes You can shrink volumes while the trusted storage pool is online and available. For example, you may need to remove a brick that has become inaccessible in a distributed volume because of a hardware or network failure. When shrinking distributed replicated volumes, the number of bricks being removed must be a multiple of the replica count. For example, to shrink a distributed replicated volume with a replica count of 3, you need to remove bricks in multiples of 3 (such as 6, 9, 12, etc.). In addition, the bricks you are removing must be from the same sub-volume (the same replica set). In a non-replicated volume, all bricks must be available in order to migrate data and perform the remove brick operation. In a replicated or arbitrated volume, at least one of the data bricks in the replica set must be available. The guidelines are identical when removing a distribution set from a distributed replicated volume with arbiter bricks. If you want to reduce the replica count of an arbitrated distributed replicated volume to replica 3, you must remove only the arbiter bricks. If you want to reduce a volume from arbitrated distributed replicated to distributed only, remove the arbiter brick and one replica brick from each replica subvolume. Shrinking a Volume Remove a brick using the following command: For example: Note If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick. You can view the status of the remove brick operation using the following command: For example: When the data migration shown in the status command is complete, run the following command to commit the brick removal: For example, After the brick removal, you can check the volume information using the following command: The command displays information similar to the following: 11.8.1. Shrinking a Geo-replicated Volume Remove a brick using the following command: For example: Note If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick. Use geo-replication config checkpoint to ensure that all the data in that brick is synced to the slave. Set a checkpoint to help verify the status of the data synchronization. Verify the checkpoint completion for the geo-replication session using the following command: You can view the status of the remove brick operation using the following command: For example: Stop the geo-replication session between the master and the slave: When the data migration shown in the status command is complete, run the following command to commit the brick removal: For example, After the brick removal, you can check the volume information using the following command: Start the geo-replication session between the hosts: 11.8.2. Shrinking a Tiered Volume Warning Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. 
Red Hat no longer recommends its use, and does not support tiering in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. You can shrink a tiered volume while the trusted storage pool is online and available. For example, you may need to remove a brick that has become inaccessible because of a hardware or network failure. 11.8.2.1. Shrinking a Cold Tier Volume Detach the tier by performing the steps listed in Section 16.7, "Detaching a Tier from a Volume (Deprecated)" Remove a brick using the following command: For example: Note If the remove-brick command is run with force or without any option, the data on the brick that you are removing will no longer be accessible at the glusterFS mount point. When using the start option, the data is migrated to other bricks, and on a successful commit the removed brick's information is deleted from the volume configuration. Data can still be accessed directly on the brick. You can view the status of the remove brick operation using the following command: For example: When the data migration shown in the status command is complete, run the following command to commit the brick removal: For example, Rerun the attach-tier command only with the required set of bricks: # gluster volume tier VOLNAME attach [replica COUNT] BRICK ... For example, Important When you attach a tier, an internal process called fix-layout commences internally to prepare the hot tier for use. This process takes time and there will be a delay in starting the tiering activities. 11.8.2.2. Shrinking a Hot Tier Volume You must first decide on which bricks should be part of the hot tiered volume and which bricks should be removed from the hot tier volume. Detach the tier by performing the steps listed in Section 16.7, "Detaching a Tier from a Volume (Deprecated)" Rerun the attach-tier command only with the required set of bricks: # gluster volume tier VOLNAME attach [replica COUNT] brick... Important When you reattach a tier, an internal process called fix-layout commences internally to prepare the hot tier for use. This process takes time and there will be a delay in starting the tiering activities. 11.8.3. Stopping a remove-brick Operation A remove-brick operation that is in progress can be stopped by using the stop command. Note Files that were already migrated during the remove-brick operation will not be migrated back to the same brick when the operation is stopped. To stop the remove-brick operation, use the following command: For example:
[ "gluster volume remove-brick VOLNAME BRICK start", "gluster volume remove-brick test-volume server2:/rhgs/brick2 start Remove Brick start successful", "gluster volume remove-brick VOLNAME BRICK status", "gluster volume remove-brick test-volume server2:/rhgs/brick2 status Node Rebalanced size scanned failures skipped status run time -files in h:m:s ---------- --------- ------ ------ -------- ------ --------- -------- localhost 5032 43.4MB 27715 0 5604 completed 0:15:05 10.70.43.41 0 0Bytes 0 0 0 completed 0:08:18 volume rebalance: test-volume: success", "gluster volume remove-brick VOLNAME BRICK commit", "gluster volume remove-brick test-volume server2:/rhgs/brick2 commit", "gluster volume info", "gluster volume info Volume Name: test-volume Type: Distribute Status: Started Number of Bricks: 3 Bricks: Brick1: server1:/rhgs/brick1 Brick3: server3:/rhgs/brick3 Brick4: server4:/rhgs/brick4", "gluster volume remove-brick VOLNAME BRICK start", "gluster volume remove-brick MASTER_VOL MASTER_HOST:/rhgs/brick2 start Remove Brick start successful", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config checkpoint now", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status detail", "gluster volume remove-brick VOLNAME BRICK status", "gluster volume remove-brick MASTER_VOL MASTER_HOST:/rhgs/brick2 status", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop", "gluster volume remove-brick VOLNAME BRICK commit", "gluster volume remove-brick MASTER_VOL MASTER_HOST:/rhgs/brick2 commit", "gluster volume info", "gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start", "gluster volume remove-brick VOLNAME BRICK start", "gluster volume remove-brick test-volume server2:/rhgs/brick2 start Remove Brick start successful", "gluster volume remove-brick VOLNAME BRICK status", "gluster volume remove-brick test-volume server2:/rhgs/brick2 status Node Rebalanced-files size scanned failures status --------- ----------- ----------- ----------- ----------- ------------ localhost 16 16777216 52 0 in progress 192.168.1.1 13 16723211 47 0 in progress", "gluster volume remove-brick VOLNAME BRICK commit", "gluster volume remove-brick test-volume server2:/rhgs/brick2 commit", "gluster volume tier test-volume attach replica 3 server1:/rhgs/tier1 server2:/rhgs/tier2 server1:/rhgs/tier3 server2:/rhgs/tier4", "gluster volume remove-brick VOLNAME BRICK stop", "gluster volume remove-brick test-volume server1:/rhgs/brick1/ server2:/brick2/ stop Node Rebalanced-files size scanned failures skipped status run-time in secs ---- ------- ---- ---- ------ ----- ----- ------ localhost 23 376Bytes 34 0 0 stopped 2.00 rhs1 0 0Bytes 88 0 0 stopped 2.00 rhs2 0 0Bytes 0 0 0 not started 0.00 'remove-brick' process may be in the middle of a file migration. The process will be fully stopped once the migration of the file is complete. Please check remove-brick process for completion before doing any further brick related tasks on the volume." ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-shrinking_volumes
Chapter 4. Collections and content signing in private automation hub
Chapter 4. Collections and content signing in private automation hub As an automation administrator for your organization, you can configure private automation hub for signing and publishing Ansible content collections from different groups within your organization. For additional security, automation creators can configure Ansible-Galaxy CLI to verify these collections to ensure they have not been changed after they were uploaded to automation hub. 4.1. Configuring content signing on private automation hub To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing. Prerequisites Your GnuPG key pairs have been securely set up and managed by your organization. Your public/private key pair has proper access for configuring content signing on private automation hub. Procedure Create a signing script that accepts only a filename. Note This script acts as the signing service and must generate an ascii-armored detached gpg signature for that file using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable. The script then prints out a JSON structure with the following format. {"file": "filename", "signature": "filename.asc"} All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature, as shown. The following example shows a script that produces signatures for content: #!/usr/bin/env bash FILE_PATH=$1 SIGNATURE_PATH="$1.asc" ADMIN_ID="$PULP_SIGNING_KEY_FINGERPRINT" PASSWORD="password" # Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase \ $PASSWORD --homedir ~/.gnupg/ --detach-sign --default-key $ADMIN_ID \ --armor --output $SIGNATURE_PATH $FILE_PATH # Check the exit status STATUS=$? if [ $STATUS -eq 0 ]; then echo {\"file\": \"$FILE_PATH\", \"signature\": \"$SIGNATURE_PATH\"} else exit $STATUS fi After you deploy a private automation hub with signing enabled to your Ansible Automation Platform cluster, new UI additions display when you interact with collections. Review the Ansible Automation Platform installer inventory file for options that begin with automationhub_* . [all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh The two new keys ( automationhub_auto_sign_collections and automationhub_require_content_approval ) indicate that the collections must be signed and require approval after they are uploaded to private automation hub. 4.2. Using content signing services in private automation hub After you have configured content signing on your private automation hub, you can manually sign a new collection or replace an existing signature with a new one so that users who want to download a specific collection have the assurance that the collection is intended for them and has not been modified after certification. Content signing on private automation hub provides solutions for the following scenarios: Your system does not have automatic signing configured and you must use a manual signing process to sign collections. The current signatures on the automatically configured collections are corrupted and must be replaced with new signatures.
Additional signatures are required for previously signed content. You want to rotate signatures on your collections. Procedure Log in to your private automation hub instance in the automation hub UI. In the left navigation, click Collections Approval . The Approval dashboard is displayed with a list of collections. Click Sign and approve for each collection you want to sign. Verify that the collections you signed and manually approved are displayed in the Collections tab. 4.3. Downloading signature public keys After you sign and approve collections, download the signature public keys from the automation hub UI. You must download the public key before you add it to the local system keyring. Procedure Log in to your private automation hub instance in the automation hub UI. In the navigation pane, select Signature Keys . The Signature Keys dashboard displays a list of multiple keys: collections and container images. To verify collections, download the key prefixed with collections- . To verify container images, download the key prefixed with container- . Choose one of the following methods to download your public key: Select the menu icon and click Download Key to download the public key. Select the public key from the list and click the Copy to clipboard icon. Click the drop-down menu under the Public Key tab and copy the entire public key block. Use the public key that you copied to verify the content collection that you are installing. 4.4. Configuring Ansible-Galaxy CLI to verify collections You can configure Ansible-Galaxy CLI to verify collections. This ensures that collections you download are approved by your organization and have not been changed after they were uploaded to automation hub. If a collection has been signed by automation hub, the server provides ASCII armored, GPG-detached signatures to verify the authenticity of MANIFEST.json before using it to verify the collection's contents. You must opt into signature verification by configuring a keyring for ansible-galaxy or providing the path with the --keyring option. Prerequisites Signed collections are available in automation hub to verify signature. Certified collections can be signed by approved roles within your organization. Public key for verification has been added to the local system keyring. Procedure To import a public key into a non-default keyring for use with ansible-galaxy , run the following command. gpg --import --no-default-keyring --keyring ~/.ansible/pubring.kbx my-public-key.asc Note In addition to any signatures provided by the automation hub, signature sources can also be provided in the requirements file and on the command line. Signature sources should be URIs. Use the --signature option to verify the collection name provided on the CLI with an additional signature. ansible-galaxy collection install namespace.collection --signature https://examplehost.com/detached_signature.asc --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx You can use this option multiple times to provide multiple signatures. Confirm that the collections in a requirements file list any additional signature sources following the collection's signatures key, as in the following example. 
# requirements.yml collections: - name: ns.coll version: 1.0.0 signatures: - https://examplehost.com/detached_signature.asc - file:///path/to/local/detached_signature.asc ansible-galaxy collection verify -r requirements.yml --keyring ~/.ansible/pubring.kbx When you install a collection from automation hub, the signatures provided by the server are saved along with the installed collections to verify the collection's authenticity. (Optional) If you need to verify the internal consistency of your collection again without querying the Ansible Galaxy server, run the same command you used previously using the --offline option.
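Before wiring the signing script into the installer inventory, it can be useful to smoke-test it by hand. This is a minimal sketch that assumes a hypothetical collection tarball name and that the key identified by the fingerprint is present in the GnuPG keyring of the user running the script:
export PULP_SIGNING_KEY_FINGERPRINT=<your_key_fingerprint>
./collection_signing.sh my_namespace-my_collection-1.0.0.tar.gz   # prints {"file": "...", "signature": "....asc"} on success
gpg --verify my_namespace-my_collection-1.0.0.tar.gz.asc my_namespace-my_collection-1.0.0.tar.gz
The JSON line printed by the script and a successful gpg --verify confirm that the signing service can produce valid detached signatures before collections start flowing through automation hub.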
[ "{\"file\": \"filename\", \"signature\": \"filename.asc\"}", "#!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH=\"USD1.asc\" ADMIN_ID=\"USDPULP_SIGNING_KEY_FINGERPRINT\" PASSWORD=\"password\" Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID --armor --output USDSIGNATURE_PATH USDFILE_PATH Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\\\"file\\\": \\\"USDFILE_PATH\\\", \\\"signature\\\": \\\"USDSIGNATURE_PATH\\\"} else exit USDSTATUS fi", "[all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh", "gpg --import --no-default-keyring --keyring ~/.ansible/pubring.kbx my-public-key.asc", "ansible-galaxy collection install namespace.collection --signature https://examplehost.com/detached_signature.asc --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx", "requirements.yml collections: - name: ns.coll version: 1.0.0 signatures: - https://examplehost.com/detached_signature.asc - file:///path/to/local/detached_signature.asc ansible-galaxy collection verify -r requirements.yml --keyring ~/.ansible/pubring.kbx" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/managing_red_hat_certified_and_ansible_galaxy_collections_in_automation_hub/assembly-collections-and-content-signing-in-pah
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/developing_and_compiling_your_red_hat_build_of_quarkus_applications_with_apache_maven/making-open-source-more-inclusive
Chapter 1. Sample projects and business assets in Business Central
Chapter 1. Sample projects and business assets in Business Central Business Central contains sample projects with business assets that you can use as a reference for the rules or other assets that you create in your own Red Hat Decision Manager projects. Each sample project is designed differently to demonstrate decision management or business optimization assets and logic in Red Hat Decision Manager. Note Red Hat does not provide support for the sample code included in the Red Hat Decision Manager distribution. The following sample projects are available in Business Central: Course_Scheduling : (Business optimization) Course scheduling and curriculum decision process. Assigns lectures to rooms and determines a student's curriculum based on factors such as course conflicts and class room capacity. Dinner_Party : (Business optimization) Guest seating optimization using guided decision tables. Assigns guest seating based on each guest's job type, political beliefs, and known relationships. Employee_Rostering : (Business optimization) Employee rostering optimization using decision and solver assets. Assigns employees to shifts based on skills. Evaluation_Process : (Process automation) Evaluation process using business process assets. Evaluates employees based on performance. IT_Orders : (Process automation and case management) Ordering case using business process and case management assets. Places an IT hardware order based on needs and approvals. Mortgages : (Decision management with rules) Loan approval process using rule-based decision assets. Determines loan eligibility based on applicant data and qualifications. Mortgage_Process : (Process automation) Loan approval process using business process and decision assets. Determines loan eligibility based on applicant data and qualifications. OptaCloud : (Business optimization) Resource allocation optimization using decision and solver assets. Assigns processes to computers with limited resources. Traffic_Violation : (Decision management with DMN) Traffic violation decision service using a Decision Model and Notation (DMN) model. Determines driver penalty and suspension based on traffic violations. 1.1. Accessing sample projects and business assets in Business Central You can use the sample projects in Business Central to explore business assets as a reference for the rules or other assets that you create in your own Red Hat Decision Manager projects. Prerequisites Business Central is installed and running. For installation options, see Planning a Red Hat Decision Manager installation . Procedure In Business Central, go to Menu Design Projects . If there are existing projects, you can access the samples by clicking the MySpace default space and selecting Try Samples from the Add Project drop-down menu. If there are no existing projects, click Try samples . Review the descriptions for each sample project to determine which project you want to explore. Each sample project is designed differently to demonstrate decision management or business optimization assets and logic in Red Hat Decision Manager. Select one or more sample projects and click Ok to add the projects to your space. In the Projects page of your space, select one of the sample projects to view the assets for that project. Select each asset to explore how the project is designed to achieve the specified goal or workflow. Some of the sample projects contain more than one page of assets. Click the left or right arrows in the upper-right corner to view the full asset list. Figure 1.1. 
Asset page selection In the upper-right corner of the project Assets page, click Build to build the sample project or Deploy to build the project and then deploy it to KIE Server. Note You can also select the Build & Install option to build the project and publish the KJAR file to the configured Maven repository without deploying to a KIE Server. In a development environment, you can click Deploy to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and replace all instances. The time you deploy or redeploy the built KJAR, the deployment unit (KIE container) is automatically updated in the same target KIE Server. In a production environment, the Redeploy option is disabled and you can click Deploy only to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server. To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production . To configure the deployment behavior for a corresponding project in Business Central, go to project Settings General Settings Version , toggle the Development Mode option, and click Save . By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode. To review project deployment details, click View deployment details in the deployment banner at the top of the screen or in the Deploy drop-down menu. This option directs you to the Menu Deploy Execution Servers page.
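As a hedged sketch of setting the org.kie.server.mode system property mentioned above on Red Hat JBoss EAP, the following appends it to the server JVM options; the EAP_HOME path and the use of standalone.conf are assumptions that depend on how your KIE Server is started:
echo 'JAVA_OPTS="$JAVA_OPTS -Dorg.kie.server.mode=production"' >> $EAP_HOME/bin/standalone.conf
$EAP_HOME/bin/standalone.sh -c standalone-full.xml
Restart KIE Server after changing the property so that the new mode takes effect.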
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_decision_manager/decision-examples-central-con_getting-started-decision-services
Chapter 132. KafkaBridgeStatus schema reference
Chapter 132. KafkaBridgeStatus schema reference Used in: KafkaBridge
conditions (Condition array): List of status conditions.
observedGeneration (integer): The generation of the CRD that was last reconciled by the operator.
url (string): The URL at which external client applications can access the Kafka Bridge.
labelSelector (string): Label selector for pods providing this resource.
replicas (integer): The current number of pods being used to provide this resource.
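As a quick way to read these status fields from the command line, the following hedged example assumes a KafkaBridge resource named my-bridge in a namespace called kafka:
oc get kafkabridge my-bridge -n kafka -o jsonpath='{.status.url}{"\n"}{.status.replicas}{"\n"}'
oc get kafkabridge my-bridge -n kafka -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
The first command prints the bridge URL and replica count; the second prints the status of the Ready condition.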
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaBridgeStatus-reference
Chapter 8. Additional resources
Chapter 8. Additional resources Converting from a Linux distribution to RHEL using the Convert2RHEL utility How to perform an unsupported conversion from a RHEL-derived Linux distribution to RHEL Red Hat Enterprise Linux technology capabilities and limits Convert2RHEL FAQ (Frequently Asked Questions)
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/converting_from_a_linux_distribution_to_rhel_using_the_convert2rhel_utility_in_red_hat_insights/additional_resources
Chapter 3. Ansible vault
Chapter 3. Ansible vault Sometimes your playbook needs to use sensitive data such as passwords, API keys, and other secrets to configure managed hosts. Storing this information in plain text in variables or other Ansible-compatible files is a security risk because any user with access to those files can read the sensitive data. With Ansible vault, you can encrypt, decrypt, view, and edit sensitive information. They could be included as: Inserted variable files in an Ansible Playbook Host and group variables Variable files passed as arguments when executing the playbook Variables defined in Ansible roles You can use Ansible vault to securely manage individual variables, entire files, or even structured data like YAML files. This data can then be safely stored in a version control system or shared with team members without exposing sensitive information. Important Files are protected with symmetric encryption of the Advanced Encryption Standard (AES256), where a single password or passphrase is used both to encrypt and decrypt the data. Note that the way this is done has not been formally audited by a third party. To simplify management, it makes sense to set up your Ansible project so that sensitive variables and all other variables are kept in separate files, or directories. Then you can protect the files containing sensitive variables with the ansible-vault command. Creating an encrypted file The following command prompts you for a new vault password. Then it opens a file for storing sensitive variables using the default editor. Viewing an encrypted file The following command prompts you for your existing vault password. Then it displays the sensitive contents of an already encrypted file. Editing an encrypted file The following command prompts you for your existing vault password. Then it opens the already encrypted file for you to update the sensitive variables using the default editor. Encrypting an existing file The following command prompts you for a new vault password. Then it encrypts an existing unencrypted file. Decrypting an existing file The following command prompts you for your existing vault password. Then it decrypts an existing encrypted file. Changing the password of an encrypted file The following command prompts you for your original vault password, then for the new vault password. Basic application of Ansible vault variables in a playbook --- - name: Create user accounts for all servers hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create user from vault.yml file user: name: "{{ username }}" password: "{{ pwhash }}" You read-in the file with variables ( vault.yml ) in the vars_files section of your Ansible Playbook, and you use the curly brackets the same way you would do with your ordinary variables. Then you either run the playbook with the ansible-playbook --ask-vault-pass command and you enter the password manually. Or you save the password in a separate file and you run the playbook with the ansible-playbook --vault-password-file /path/to/my/vault-password-file command. Additional resources ansible-vault(1) , ansible-playbook(1) man pages on your system Ansible vault Ansible vault Best Practices
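A minimal sketch of the password-file workflow mentioned above; the file location and playbook name are assumptions:
echo 'My_V4ult_P4ssw0rd' > ~/.vault_pass
chmod 600 ~/.vault_pass
ansible-vault encrypt vault.yml --vault-password-file ~/.vault_pass
ansible-playbook site.yml --vault-password-file ~/.vault_pass
Keep the password file outside version control so that the vault password is never committed alongside the encrypted data.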
[ "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "ansible-vault view vault.yml Vault password: <vault_password> my_secret: \"yJJvPqhsiusmmPPZdnjndkdnYNDjdj782meUZcw\"", "ansible-vault edit vault.yml Vault password: <vault_password>", "ansible-vault encrypt vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password> Encryption successful", "ansible-vault decrypt vault.yml Vault password: <vault_password> Decryption successful", "ansible-vault rekey vault.yml Vault password: <vault_password> New Vault password: <vault_password> Confirm New Vault password: <vault_password> Rekey successful", "--- - name: Create user accounts for all servers hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create user from vault.yml file user: name: \"{{ username }}\" password: \"{{ pwhash }}\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/ansible-vault_automating-system-administration-by-using-rhel-system-roles
Chapter 4. Production phases
Chapter 4. Production phases Full Support Phase During the Full Support Phase, Red Hat will provide: Qualified critical and important security fixes Urgent and high priority bug fixes Select enhanced software functionality This will be delivered in the form of sub-minor releases. A release of private automation hub is supported under the Full Support Phase for 6 months after its initial release. Maintenance Support 1 Phase During the Maintenance Support 1 Phase, Red Hat will provide: Qualified critical security fixes Urgent bug fixes These fixes will be delivered in the form of sub-minor releases. A release of private automation hub is supported under the Maintenance Support 1 Phase for 6 months after it leaves the Full Support Phase. Maintenance Support 2 Phase During the Maintenance Support 2 Phase, Red Hat will provide: Qualified critical security fixes These fixes will be delivered in the form of sub-minor releases. A release of private automation hub is supported under the Maintenance Support 2 Phase for 6 months after it leaves the Maintenance Support 1 Phase. All updates are provided at Red Hat's discretion.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/private_automation_hub_life_cycle/production_phases
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/installing_cryostat/making-open-source-more-inclusive
7.132. logrotate
7.132. logrotate 7.132.1. RHBA-2012:1172 - logrotate bug fix update Updated logrotate packages that fix one bug are now available for Red Hat Enterprise Linux 6. The logrotate utility simplifies the administration of multiple log files, allowing the automatic rotation, compression, removal, and mailing of log files. Bug Fix BZ# 827570 Attempting to send a file to a specific e-mail address failed if the "mailfirst" and "delaycompress" options were used at the same time. This was because logrotate searched for a file with the "gz" suffix, however the file had not yet been compressed. The underlying source code has been modified, and logrotate correctly finds and sends the file under these circumstances. All users of logrotate are advised to upgrade to these updated packages, which fix this bug.
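For context, the bug is triggered by a configuration that combines both options, similar to this hedged snippet placed under /etc/logrotate.d/ (the log path and mail address are placeholders):
/var/log/example/app.log {
    weekly
    rotate 4
    compress
    delaycompress
    mail admin@example.com
    mailfirst
}
With the updated packages, logrotate mails the just-rotated, not-yet-compressed file instead of failing while looking for a .gz file that does not exist yet.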
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/logrotate
Chapter 3. Introduction to Red Hat Virtualization Products and Features
Chapter 3. Introduction to Red Hat Virtualization Products and Features This chapter introduces the main virtualization products and features available in Red Hat Enterprise Linux 7. 3.1. KVM and Virtualization in Red Hat Enterprise Linux KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on a variety of architectures. It is built into the standard Red Hat Enterprise Linux 7 kernel and integrated with the Quick Emulator (QEMU), and it can run multiple guest operating systems. The KVM hypervisor in Red Hat Enterprise Linux is managed with the libvirt API, and tools built for libvirt (such as virt-manager and virsh ). Virtual machines are executed and run as multi-threaded Linux processes, controlled by these tools. Warning QEMU and libvirt also support a dynamic translation mode using the QEMU Tiny Code Generator (TCG), which does not require hardware virtualization support. This configuration is not supported by Red Hat. For more information about this limitation, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . Figure 3.1. KVM architecture Virtualization features supported by KVM on Red Hat Enterprise 7 include the following: Overcommitting The KVM hypervisor supports overcommitting of system resources. Overcommitting means allocating more virtualized CPUs or memory than the available resources on the system, so the resources can be dynamically swapped when required by one guest and not used by another. This can improve how efficiently guests use the resources of the host, and can make it possible for the user to require fewer hosts. Important Overcommitting involves possible risks to system stability. For more information on overcommitting with KVM, and the precautions that should be taken, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . KSM Kernel Same-page Merging (KSM) , used by the KVM hypervisor, enables KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication. Note For more information on KSM, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide . QEMU guest agent The QEMU guest agent runs on the guest operating system and makes it possible for the host machine to issue commands to the guest operating system. Note For more information on the QEMU guest agent, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . Disk I/O throttling When several virtual machines are running simultaneously, they can interfere with the overall system performance by using excessive disk I/O. Disk I/O throttling in KVM provides the ability to set a limit on disk I/O requests sent from individual virtual machines to the host machine. This can prevent a virtual machine from over-utilizing shared resources, and impacting the performance of other virtual machines. Note For instructions on using disk I/O throttling, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide . Automatic NUMA balancing Automatic non-uniform memory access (NUMA) balancing moves tasks, which can be threads or processes closer to the memory they are accessing. This improves the performance of applications running on non-uniform memory access (NUMA) hardware systems, without any manual tuning required for Red Hat Enterprise Linux 7 guests. 
Note For more information on automatic NUMA balancing, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide . Virtual CPU hot add Virtual CPU (vCPU) hot add capability provides the ability to increase processing power on running virtual machines as needed, without shutting down the guests. The vCPUs assigned to a virtual machine can be added to a running guest to either meet the workload's demands, or to maintain the Service Level Agreement (SLA) associated with the workload. Note For more information on virtual CPU hot add, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . Nested virtualization As a Technology Preview, Red Hat Enterprise Linux 7.2 and later offers hardware-assisted nested virtualization. This feature enables KVM guests to act as hypervisors and create their own guests. This can for example be used for debugging hypervisors on a virtual machine or testing larger virtual deployments on a limited amount of physical machines. Note For further information on setting up and using nested virtualization, see Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . KVM guest virtual machine compatibility Red Hat Enterprise Linux 7 servers have certain support limits. The following URLs explain the processor and memory amount limitations for Red Hat Enterprise Linux: For the host system: https://access.redhat.com/site/articles/rhel-limits For the KVM hypervisor: https://access.redhat.com/site/articles/rhel-kvm-limits For a complete chart of supported operating systems and host and guest combinations see Red Hat Customer Portal Note To verify whether your processor supports virtualization extensions and for information on enabling virtualization extensions if they are disabled, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide .
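As a quick, hedged check on an x86_64 host, you can confirm that the processor exposes the Intel VT-x or AMD-V extensions and that the KVM modules are loaded before relying on the features described above:
# lscpu | grep -i virtualization
# grep -E 'vmx|svm' /proc/cpuinfo
# lsmod | grep kvm
If the last command returns no output, the kvm and kvm_intel or kvm_amd modules are not loaded, which usually means the extensions are disabled in firmware.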
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/chap-Virtualization_Getting_Started-Products
Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Chapter 2. Deploy OpenShift Data Foundation using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. Follow this deployment method to use local storage to back persistent volumes for your OpenShift Container Platform applications. Use this section to deploy OpenShift Data Foundation on IBM Z infrastructure where OpenShift Container Platform is already installed. 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . 
Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.3. Finding available storage devices (optional) This step is additional information and can be skipped as the disks are automatically discovered during storage cluster creation. Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating Persistent Volumes (PV) for IBM Z. Procedure List and verify the name of the worker nodes with the OpenShift Data Foundation label. Example output: Log in to each worker node that is used for OpenShift Data Foundation resources and find the unique by-id device name for each available raw block device. Example output: In this example, for bmworker01 , the available local device is sdb . Identify the unique ID for each of the devices selected in Step 2. In the above example, the ID for the local device sdb Repeat the above step to identify the device ID for all the other nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details. 2.4. Enabling DASD devices If you are using DASD devices, you must enable them before creating an OpenShift Data Foundation cluster on IBM Z. Once the DASD devices are available to z/VM guests, complete the following steps from the compute or infrastructure node on which an OpenShift Data Foundation storage node is being installed. Procedure To enable the DASD device, run the following command: 1 For <device_bus_id>, specify the ID of the DASD device bus-ID. For example, 0.0.b100 . To verify the status of the DASD device you can use the the lsdasd and lsblk commands. To low-level format the device and specify the disk name, run the following command: 1 For <device_name>, specify the disk name. For example, dasdb . Important The use of DASD quick-formatting Extent Space Efficient (ESE) DASD is not supported on OpenShift Data Foundation. If you are using ESE DASDs, make sure to disable quick-formatting with the --mode=full parameter. To auto-create one partition using the whole disk, run the following command: 1 For <device_name>, enter the disk name you have specified in the step. For example, dasdb . Once these steps are completed, the device is available during OpenShift Data Foundation deployment as /dev/dasdb1 . Important During LocalVolumeSet creation, make sure to select only the Part option as device type. Additional resources For details on the commands, see Commands for Linux on IBM Z in IBM documentation. 2.5. Creating OpenShift Data Foundation cluster on IBM Z Use this procedure to create an OpenShift Data Foundation cluster on IBM Z. Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have at least three worker nodes with the same storage type and size attached to each node (for example, 200 GB) to use local storage devices on IBM Z or IBM(R) LinuxONE. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . 
Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select the Create a new StorageClass using the local storage devices for Backing storage type option. Select Full Deployment for the Deployment type option. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVME . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device type from the dropdown list. By default, Disk and Part options are included in the Device Type field. Note For a multi-path device, select the Mpath option from the drop-down exclusively. For a DASD-based cluster, ensure that only the Part option is included in the Device Type and remove the Disk option. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. You can check the box to select Taint nodes. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Choose one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Select Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. 
Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> ''), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide CA Certificate , Client Certificate and Client Private Key . Click Save . Select Default (SDN) as Multus is not yet supported on OpenShift Data Foundation on IBM Z. Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page:: Review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
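A short, hedged sketch of applying the cluster.ocs.openshift.io/openshift-storage label that the procedures above rely on; the node names are placeholders for your three workers:
oc label nodes worker-0 worker-1 worker-2 cluster.ocs.openshift.io/openshift-storage=''
oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o name
The second command should list exactly the nodes you intend OpenShift Data Foundation to use before you start the StorageSystem wizard.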
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "oc get nodes -l=cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION bmworker01 Ready worker 6h45m v1.16.2 bmworker02 Ready worker 6h45m v1.16.2 bmworker03 Ready worker 6h45m v1.16.2", "oc debug node/<node name>", "oc debug node/bmworker01 Starting pod/bmworker01-debug To use host binaries, run `chroot /host` Pod IP: 10.0.135.71 If you don't see a command prompt, try pressing enter. sh-4.2# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 500G 0 loop sda 8:0 0 120G 0 disk |-sda1 8:1 0 384M 0 part /boot `-sda4 8:4 0 119.6G 0 part `-coreos-luks-root-nocrypt 253:0 0 119.6G 0 dm /sysroot sdb 8:16 0 500G 0 disk", "sh-4.4#ls -l /dev/disk/by-id/ | grep sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-360050763808104bc2800000000000259 -> ../../sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-SIBM_2145_00e020412f0aXX00 -> ../../sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-0x60050763808104bc2800000000000259 -> ../../sdb", "scsi-0x60050763808104bc2800000000000259", "sudo chzdev -e <device_bus_id> 1", "sudo dasdfmt /dev/<device_name> -b 4096 -p --mode=full 1", "sudo fdasd -a /dev/<device_name> 1", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_z/deploy-using-local-storage-devices-ibmz
probe::nfsd.read
probe::nfsd.read Name probe::nfsd.read - NFS server reading data from a file for client Synopsis nfsd.read Values:
offset: the offset of file
vlen: read blocks
file: argument file, indicates if the file has been opened
fh: file handle (the first part is the length of the file handle)
count: read bytes
client_ip: the ip address of client
size: read bytes
vec: struct kvec, includes buf address in kernel address and length of each buffer
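As a minimal usage sketch, the probe can be attached from the command line to watch per-read sizes and offsets; this assumes the systemtap packages and matching kernel debuginfo are installed on the NFS server:
# stap -e 'probe nfsd.read { printf("nfsd.read: count=%d offset=%d\n", count, offset) }'
Only the numeric count and offset values are printed here to keep the example unambiguous; the other values listed above can be added to the printf as needed.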
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfsd-read
Chapter 34. ExternalLogging schema reference
Chapter 34. ExternalLogging schema reference Used in: CruiseControlSpec, EntityTopicOperatorSpec, EntityUserOperatorSpec, KafkaBridgeSpec, KafkaClusterSpec, KafkaConnectSpec, KafkaMirrorMaker2Spec, KafkaMirrorMakerSpec, ZookeeperClusterSpec
The type property is a discriminator that distinguishes use of the ExternalLogging type from InlineLogging. It must have the value external for the type ExternalLogging.
type (string): Must be external.
valueFrom (ExternalConfigurationReference): ConfigMap entry where the logging configuration is stored.
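As a hedged illustration, external logging for a Kafka cluster might point at a user-supplied ConfigMap like this; the ConfigMap name and key are assumptions and must match an existing ConfigMap in the same namespace:
logging:
  type: external
  valueFrom:
    configMapKeyRef:
      name: my-kafka-logging
      key: log4j.properties
This fragment sits under the spec section of the resource being configured, for example spec.kafka in a Kafka custom resource.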
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-externallogging-reference
Chapter 31. Associating secondary interfaces metrics to network attachments
Chapter 31. Associating secondary interfaces metrics to network attachments 31.1. Extending secondary network metrics for monitoring Secondary devices, or interfaces, are used for different purposes. It is important to have a way to classify them to be able to aggregate the metrics for secondary devices with the same classification. Exposed metrics contain the interface but do not specify where the interface originates. This is workable when there are no additional interfaces. However, if secondary interfaces are added, it can be difficult to use the metrics since it is hard to identify interfaces using only interface names. When adding secondary interfaces, their names depend on the order in which they are added, and different secondary interfaces might belong to different networks and can be used for different purposes. With pod_network_name_info it is possible to extend the current metrics with additional information that identifies the interface type. In this way, it is possible to aggregate the metrics and to add specific alarms to specific interface types. The network type is generated using the name of the related NetworkAttachmentDefinition , that in turn is used to differentiate different classes of secondary networks. For example, different interfaces belonging to different networks or using different CNIs use different network attachment definition names. 31.1.1. Network Metrics Daemon The Network Metrics Daemon is a daemon component that collects and publishes network related metrics. The kubelet is already publishing network related metrics you can observe. These metrics are: container_network_receive_bytes_total container_network_receive_errors_total container_network_receive_packets_total container_network_receive_packets_dropped_total container_network_transmit_bytes_total container_network_transmit_errors_total container_network_transmit_packets_total container_network_transmit_packets_dropped_total The labels in these metrics contain, among others: Pod name Pod namespace Interface name (such as eth0 ) These metrics work well until new interfaces are added to the pod, for example via Multus , as it is not clear what the interface names refer to. The interface label refers to the interface name, but it is not clear what that interface is meant for. In case of many different interfaces, it would be impossible to understand what network the metrics you are monitoring refer to. This is addressed by introducing the new pod_network_name_info described in the following section. 31.1.2. Metrics with network name This daemonset publishes a pod_network_name_info gauge metric, with a fixed value of 0 : pod_network_name_info{interface="net0",namespace="namespacename",network_name="nadnamespace/firstNAD",pod="podname"} 0 The network name label is produced using the annotation added by Multus. It is the concatenation of the namespace the network attachment definition belongs to, plus the name of the network attachment definition. The new metric alone does not provide much value, but combined with the network related container_network_* metrics, it offers better support for monitoring secondary networks. 
Using PromQL queries like the following, it is possible to get a new metric containing the value and the network name retrieved from the k8s.v1.cni.cncf.io/networks-status annotation: (container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
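Building on the same join, a hedged example that narrows the result to a single secondary network; the network attachment definition name default/my-secondary-net is a placeholder:
(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info{network_name="default/my-secondary-net"} )
Because pod_network_name_info has a fixed value of 0, the addition leaves the byte counters unchanged while attaching the network_name label for filtering and aggregation.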
[ "pod_network_name_info{interface=\"net0\",namespace=\"namespacename\",network_name=\"nadnamespace/firstNAD\",pod=\"podname\"} 0", "(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/associating-secondary-interfaces-metrics-to-network-attachments
14.3. LDAP Search Filters
14.3. LDAP Search Filters Search filters select the entries to be returned for a search operation. They are most commonly used with the ldapsearch command-line utility. When using ldapsearch , there can be multiple search filters in a file, with each filter on a separate line in the file, or a search filter can be specified directly on the command line. The basic syntax of a search filter is: For example: In this example, buildingname is the attribute, >= is the operator, and alpha is the value. Filters can also be defined that use different attributes combined together with Boolean operators. Note When performing a substring search using a matching rule filter, use the asterisk (*) character as a wildcard to represent zero or more characters. For example, to search for an attribute value that starts with the letter l and ends with the letter n , enter a l*n in the value portion of the search filter. Similarly, to search for all attribute values beginning with the letter u , enter a value of u* in the value portion of the search filter. To search for a value that contains the asterisk (*) character, the asterisk must be escaped with the designated escape sequence, \5c2a . For example, to search for all employees with businessCategory attribute values of Example*Net product line , enter the following value in the search filter: Note A common mistake is to assume that the directory is searched based on the attributes used in the distinguished name. The distinguished name is only a unique identifier for the directory entry and cannot be used as a search key. Instead, search for entries based on the attribute-data pairs stored on the entry itself. Thus, if the distinguished name of an entry is uid= user_name ,ou=People,dc=example,dc=com , then a search for dc=example does not match that entry unless the dc attribute exists in this entry and is set to example . 14.3.1. Using Attributes in Search Filters The most basic sort of search looks for the presence of attributes or specific values in entries. There are many variations on how to look for attributes in entries. It is possible to check that the attribute merely exists, to match an exact value, or to list matches against a partial value. A presence search uses a wild card (an asterisk) to return every entry which has that attribute set, regardless of value. For example, this returns every entry which has a manager attribute: It is also possible to search for an attribute with a specific value; this is called an equality search. For example: This search filter returns all entries that contain the common name set to example . Most of the time, equality searches are not case sensitive. When an attribute has values associated with a language tag, all of the values are returned. Thus, the following two attribute values both match the "(cn= example )" filter: It is also possible to search for a partial match on an attribute value, a substring index. For example: The length of the substring searches is configured in the substring index itself, as described in Section 13.6, "Changing the Width for Indexed Substring Searches" . 14.3.2. Using Operators in Search Filters Operators in search filters set the relationship between the attribute and the given search value. For people searches, operators can be used to set a range, to return a last names within a subset of letters in the alphabet or employee numbers that come after a certain number. 
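As a hedged example of a range search with these operators, the following command assumes a Directory Server host, bind DN, and suffix similar to the examples in this guide; adjust them for your deployment:
# ldapsearch -x -h server.example.com -p 389 -D "cn=Directory Manager" -W -b "ou=People,dc=example,dc=com" "(uidNumber>=5000)" uid uidNumber
The filter returns only entries whose uidNumber attribute is greater than or equal to 5000, and the trailing attribute list limits the output to the uid and uidNumber values.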
Operators also enable phonetic and approximate searches, which allow more effective searches with imperfect information and are particularly useful in internationalized directories. The operators that can be used in search filters are listed in Table 14.2, "Search Filter Operators" . In addition to these search filters, special filters can be specified to work with a preferred language collation order. For information on how to search a directory with international charactersets, see Section D.4, "Searching an Internationalized Directory" . Table 14.2. Search Filter Operators Search Type Operator Description Equality = Returns entries containing attribute values that exactly match the specified value. For example, cn= example Substring = string * string Returns entries containing attributes containing the specified substring. For example, cn= exa*l . The asterisk (*) indicates zero (0) or more characters. Greater than or equal to >= Returns entries containing attributes that are greater than or equal to the specified value. For example, uidNumber >= 5000 . Less than or equal to <= Returns entries containing attributes that are less than or equal to the specified value. For example, uidNumber <= 5000 . Presence =* Returns entries containing one or more values for the specified attribute. For example, cn=* . Approximate ~= Returns entries containing the specified attribute with a value that is approximately equal to the value specified in the search filter. For example, l~=san fransico can return l=san francisco . 14.3.3. Using Compound Search Filters Multiple search filter components can be combined using Boolean operators expressed in prefix notation as follows: Boolean-operator is any one of the Boolean operators listed in Table 14.3, "Search Filter Boolean Operators" . For example, this filter returns all entries that do not contain the specified value: Obviously, compound search filters are most useful when they are nested together into completed expressions: These compound filters can be combined with other types of searches (approximate, substring, other operators) to get very detailed results. For example, this filter returns all entries whose organizational unit is Marketing and whose description attribute does not contain the substring X.500 : That filter can be expanded to return entries whose organizational unit is Marketing , that do not have the substring X.500 , and that have example or demo set as a manager : This filter returns all entries that do not represent a person and whose common name is similar to printer3b : Table 14.3. Search Filter Boolean Operators Operator Symbol Description AND & All specified filters must be true for the statement to be true. For example, (&(filter)(filter)(filter)...) . OR | At least one specified filter must be true for the statement to be true. For example, (|(filter)(filter)(filter)...) . NOT ! The specified statement must not be true for the statement to be true. Only one filter is affected by the NOT operator. For example, (!(filter)) . Boolean expressions are evaluated in the following order: Innermost to outermost parenthetical expressions first. All expressions from left to right. 14.3.4. Using Matching Rules A matching rule tells the Directory Server how to compare two values (the value stored in the attribute and the value in the search filter). A matching rule also defines how to generate index keys. Matching rules are somewhat related to attribute syntaxes. 
Syntaxes define the format of an attribute value; matching rules define how that format is compared and indexed. There are three different types of matching rules: EQUALITY specifies how to compare two values for an equal match. For example, how to handle strings like "Fred" and "FRED". Search filters that test for equality (for example, attribute=value ) use the EQUALITY rule. Equality (eq) indexes use the EQUALITY rule to generate the index keys. Update operations use the EQUALITY rule to compare values to be updated with values already in an entry. ORDERING specifies how to compare two values to see if one value is greater or less than another value. Search filters that set a range (for example, attribute<=value or attribute>=value ) use the ORDERING rule. An index for an attribute with an ORDERING rule orders the equality values. SUBSTR specifies how to do substring matching. Substring search filters (for example, attribute=*partial_string* or attribute=*end_string ) use the SUBSTR rule. Substring (sub) indexes use the SUBSTR rule to generate the index keys. Important A matching rule is required in order to support searching or indexing for the corresponding search filter or index type. For example, an attribute must have an EQUALITY matching rule in order to support equality search filters and eq indexes for that attribute. An attribute must have both an ORDERING matching rule and an EQUALITY matching rule in order to support range search filters and indexed range searches. A search operation will be rejected with PROTOCOL_ERROR or UNWILLING_TO_PERFORM if an attempt is made to use a search filter for an attribute that has no corresponding matching rule. Example 14.1. Matching Rules and Custom Attributes Example Corp. administrators create a custom attribute type called MyFirstName with IA5 String (7-bit ASCII) syntax and an EQUALITY matching rule of caseExactIA5Match. An entry with a MyFirstName value of Fred is returned in a search with a filter of (MyFirstName=Fred) , but it is not returned for filters like (MyFirstName=FRED) and (MyFirstName=fred) Fred , FRED , and fred are all valid IA5 String values, but they do not match using the caseExactIA5Match rule. For all three variants of Fred to be returned in a search, then the MyFirstName should be defined to use the caseIgnoreIA5Match matching rule. An extensible matching rule search filter can be used to search for an attribute value with a different matching rule than the one defined for the attribute. The matching rule must be compatible with the syntax of the attribute being searched. For example, to run a case insensitive search for an attribute that has a case-sensitive matching rule defined for it, specify a case insensitive matching rule in the search filter. Note Matching rules are used for searches in internationalized directories, to specify the language types to use for the results. This is covered in Section D.4, "Searching an Internationalized Directory" . Note An index for an attributes uses whatever matching rules are defined for that attribute in its schema definition. Additional matching rules to use for an index can be configured using the nsMatchingRule attribute, as in Section 13.2.1, "Creating Indexes Using the Command Line" . The syntax of the matching rule filter inserts a matching rule name or OID into the search filter: attr is an attribute belonging to entries being searched, such as cn or mail . 
matchingRule is a string that contains the name or OID of the rule to use to match attribute values according to the required syntax. value is either the attribute value to search for or a relational operator plus the attribute value to search for. The syntax of the value of the filter depends on the matching rule format used. A matching rule is actually a schema element, and, as with other schema elements is uniquely identified by an object identifier (OID). Many of the matching rules defined for Red Hat Directory Server relate to language codes and set internationalized collation orders supported by the Directory Server. For example, the OID 2.16.840.1.113730.3.3.2.17.1 identifies the Finnish collation order. Note Unlike other schema elements, additional matching rules cannot be added to the Directory Server configuration. Most of the matching rules list in following list are used for equality indexes. Matching rules with ordering in their name are used for ordering indexes, and those with substring in their name are used for substring (SUBSTR) indexes. (The matching rules used for international matching and collation orders use a different naming scheme.) Bitwise AND match Performs bitwise AND matches. OID: 1.2.840.113556.1.4.803 Compatible syntaxes: Typically used with Integer and numeric strings. Directory Server converts numeric strings automatically to integer. Bitwise OR match Performs bitwise OR matches. OID: 1.2.840.113556.1.4.804 Compatible syntaxes: Typically used with Integer and numeric strings. Directory Server converts numeric strings automatically to integer. booleanMatch Evaluates whether the values to match are TRUE or FALSE OID: 2.5.13.13 Compatible syntaxes: Boolean caseExactIA5Match Makes a case-sensitive comparison of values. OID: 1.3.6.1.4.1.1466.109.114.1 Compatible syntaxes: IA5 Syntax, URI caseExactMatch Makes a case-sensitive comparison of values. OID: 2.5.13.5 Compatible syntaxes: Directory String, Printable String, OID caseExactOrderingMatch Allows case-sensitive ranged searches (less than and greater than). OID: 2.5.13.6 Compatible syntaxes: Directory String, Printable String, OID caseExactSubstringsMatch Performs case-sensitive substring and index searches. OID: 2.5.13.7 Compatible syntaxes: Directory String, Printable String, OID caseIgnoreIA5Match Performs case-insensitive comparisons of values. OID: 1.3.6.1.4.1.1466.109.114.2 Compatible syntaxes: IA5 Syntax, URI caseIgnoreIA5SubstringsMatch Performs case-insensitive searches on substrings and indexes. OID: 1.3.6.1.4.1.1466.109.114.3 Compatible syntaxes: IA5 Syntax, URI caseIgnoreListMatch Performs case-insensitive comparisons of values. OID: 2.5.13.11 Compatible syntaxes: Postal address caseIgnoreListSubstringsMatch Performs case-insensitive searches on substrings and indexes. OID: 2.5.13.12 Compatible syntaxes: Postal address caseIgnoreMatch Performs case-insensitive comparisons of values. OID: 2.5.13.2 Compatible syntaxes: Directory String, Printable String, OID caseIgnoreOrderingMatch Allows case-insensitive ranged searches (less than and greater than). OID: 2.5.13.3 Compatible syntaxes: Directory String, Printable String, OID caseIgnoreSubstringsMatch Performs case-insensitive searches on substrings and indexes. OID: 2.5.13.4 Compatible syntaxes: Directory String, Printable String, OID distinguishedNameMatch Compares distinguished name values. OID: 2.5.13.1 Compatible syntaxes: Distinguished name (DN) generalizedTimeMatch Compares values that are in a Generalized Time format. 
OID: 2.5.13.27 Compatible syntaxes: Generalized Time generalizedTimeOrderingMatch Allows ranged searches (less than and greater than) on values that are in a Generalized Time format. OID: 2.5.13.28 Compatible syntaxes: Generalized Time integerMatch Evaluates integer values. OID: 2.5.13.14 Compatible syntaxes: Integer integerOrderingMatch Allows ranged searches (less than and greater than) on integer values. OID: 2.5.13.15 Compatible syntaxes: Integer keywordMatch Compares the given search value to a string in an attribute value. OID: 2.5.13.33 Compatible syntaxes: Directory String numericStringMatch Compares more general numeric values. OID: 2.5.13.8 Compatible syntaxes: Numeric String numericStringOrderingMatch Allows ranged searches (less than and greater than) on more general numeric values. OID: 2.5.13.9 Compatible syntaxes: Numeric String numericStringSubstringMatch Compares more general numeric values. OID: 2.5.13.10 Compatible syntaxes: Numeric String objectIdentifierMatch Compares object identifier (OID) values. OID: 2.5.13.0 Compatible syntaxes: OID octetStringMatch Evaluates octet string values. OID: 2.5.13.17 Compatible syntaxes: Octet String octetStringOrderingMatch Supports ranged searches (less than and greater than) on a series of octet string values. OID: 2.5.13.18 Compatible syntaxes: Octet String telephoneNumberMatch Evaluates telephone number values. OID: 2.5.13.20 Compatible syntaxes: Telephone Number telephoneNumberSubstringsMatch Performs substring and index searches on telephone number values. OID: 2.5.13.21 Compatible syntaxes: Telephone Number uniqueMemberMatch Compares both name and UID values. OID: 2.5.13.23 Compatible syntaxes: Name and Optional UID wordMatch Compares the given search value to a string in an attribute value. This matching rule is case-insensitive. OID: 2.5.13.32 Compatible syntaxes: Directory String Table 14.4. 
Language Ordering Matching Rules Matching Rule Object Identifiers (OIDs) English (Case Exact Ordering Match) 2.16.840.1.113730.3.3.2.11.3 Albanian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.44.1 Arabic (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.1.1 Belorussian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.2.1 Bulgarian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.3.1 Catalan (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.4.1 Chinese - Simplified (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.49.1 Chinese - Traditional (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.50.1 Croatian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.22.1 Czech (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.5.1 Danish (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.6.1 Dutch (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.33.1 Dutch - Belgian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.34.1 English - US (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.11.1 English - Canadian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.12.1 English - Irish (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.14.1 Estonian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.16.1 Finnish (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.17.1 French (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.18.1 French - Belgian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.19.1 French - Canadian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.20.1 French - Swiss (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.21.1 German (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.7.1 German - Austrian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.8.1 German - Swiss (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.9.1 Greek (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.10.1 Hebrew (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.27.1 Hungarian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.23.1 Icelandic (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.24.1 Italian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.25.1 Italian - Swiss (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.26.1 Japanese (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.28.1 Korean (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.29.1 Latvian, Lettish (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.31.1 Lithuanian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.30.1 Macedonian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.32.1 Norwegian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.35.1 Norwegian - Bokmul (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.36.1 Norwegian - Nynorsk (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.37.1 Polish (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.38.1 Romanian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.39.1 Russian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.40.1 Serbian - Cyrillic (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.45.1 Serbian - Latin (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.41.1 Slovak (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.42.1 Slovenian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.43.1 Spanish (Case Insensitive Ordering Match) 
2.16.840.1.113730.3.3.2.15.1 Swedish (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.46.1 Turkish (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.47.1 Ukrainian (Case Insensitive Ordering Match) 2.16.840.1.113730.3.3.2.48.1 Table 14.5. Language Substring Matching Rules Matching Rule Object Identifiers (OIDs) English (Case Exact Substring Match) 2.16.840.1.113730.3.3.2.11.3.6 Albanian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.44.1.6 Arabic (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.1.1.6 Belorussian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.2.1.6 Bulgarian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.3.1.6 Catalan (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.4.1.6 Chinese - Simplified (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.49.1.6 Chinese - Traditional (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.50.1.6 Croatian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.22.1.6 Czech (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.5.1.6 Danish (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.6.1.6 Dutch (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.33.1.6 Dutch - Belgian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.34.1.6 English - US (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.11.1.6 English - Canadian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.12.1.6 English - Irish (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.14.1.6 Estonian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.16.1.6 Finnish (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.17.1.6 French (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.18.1.6 French - Belgian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.19.1.6 French - Canadian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.20.1.6 French - Swiss (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.21.1.6 German (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.7.1.6 German - Austrian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.8.1.6 German - Swiss (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.9.1.6 Greek (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.10.1.6 Hebrew (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.27.1.6 Hungarian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.23.1.6 Icelandic (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.24.1.6 Italian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.25.1.6 Italian - Swiss (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.26.1.6 Japanese (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.28.1.6 Korean (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.29.1.6 Latvian, Lettish (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.31.1.6 Lithuanian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.30.1.6 Macedonian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.32.1.6 Norwegian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.35.1.6 Norwegian - Bokmul (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.36.1.6 Norwegian - Nynorsk (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.37.1.6 Polish (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.38.1.6 Romanian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.39.1.6 Russian (Case Insensitive Substring Match) 
2.16.840.1.113730.3.3.2.40.1.6 Serbian - Cyrillic (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.45.1.6 Serbian - Latin (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.41.1.6 Slovak (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.42.1.6 Slovenian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.43.1.6 Spanish (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.15.1.6 Swedish (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.46.1.6 Turkish (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.47.1.6 Ukrainian (Case Insensitive Substring Match) 2.16.840.1.113730.3.3.2.48.1.6
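As a usage sketch, an extensible matching rule filter is passed to ldapsearch like any other filter; the server URL, suffix, and the custom MyFirstName attribute follow the examples above and are placeholders:
$ ldapsearch -H ldap://server.example.com -x -b "dc=example,dc=com" "(MyFirstName:caseIgnoreIA5Match:=fred)"
For this one search, the filter overrides the caseExactIA5Match rule defined for the attribute with a case-insensitive comparison.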
[ "attribute operator value", "buildingname>=alpha", "Example\\5c2a*Net product line", "\"(manager=*)\"", "\"(cn= example )\"", "cn: example cn;lang-fr: example", "\"(description=*X.500*)\" \"(sn=*nderson)\" \"(givenname=car*)\"", "\"(employeeNumber>=500)\" \"(sn~=suret)\" \"(salary<=150000)\"", "( Boolean-operator(filter)(filter)(filter) ...)", "(!(objectClass=person))", "( Boolean-operator(filter)((Boolean-operator(filter)(filter )))", "(&(ou=Marketing)(!(description=*X.500*)))", "(&(ou=Marketing)(!(description=*X.500*))(|(manager=cn=example,ou=Marketing,dc=example,dc=com)(manager=cn=demo,ou=Marketing,dc=example,dc=com)))", "(&(!(objectClass=person))(cn~=printer3b))", "(MyFirstName :caseIgnoreIA5Match: =fred)", "attr:matchingRule := value" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Finding_Directory_Entries-LDAP_Search_Filters
Chapter 3. Customizing the boot menu
Chapter 3. Customizing the boot menu This section provides information about what the Boot menu customization is, and how to customize it. Prerequisites: For information about downloading and extracting Boot images, see Extracting Red Hat Enterprise Linux boot images The Boot menu customization involves the following high-level tasks: Complete the prerequisites. Customize the Boot menu. Create a custom Boot image. 3.1. Customizing the boot menu The Boot menu is the menu which appears after you boot your system using an installation image. Normally, this menu allows you to choose between options such as Install Red Hat Enterprise Linux , Boot from local drive or Rescue an installed system . To customize the Boot menu, you can: Customize the default options. Add more options. Change the visual style (color and background). An installation media consists of ISOLINUX and GRUB2 boot loaders. The ISOLINUX boot loader is used on systems with BIOS firmware, and the GRUB2 boot loader is used on systems with UEFI firmware. Both the boot loaders are present on all Red Hat images for AMD64 and Intel 64 systems. Customizing the boot menu options can especially be useful with Kickstart. Kickstart files must be provided to the installer before the installation begins. Normally, this is done by manually editing one of the existing boot options to add the inst.ks= boot option. You can add this option to one of the pre-configured entries, if you edit boot loader configuration files on the media. 3.2. Systems with bios firmware The ISOLINUX boot loader is used on systems with BIOS firmware. Figure 3.1. ISOLINUX Boot Menu The isolinux/isolinux.cfg configuration file on the boot media contains directives for setting the color scheme and the menu structure (entries and submenus). In the configuration file, the default menu entry for Red Hat Enterprise Linux, Test this media & Install Red Hat Enterprise Linux 8 , is defined in the following block: Where: menu label - determines how the entry will be named in the menu. The ^ character determines its keyboard shortcut (the m key). menu default - provides a default selection, even though it is not the first option in the list. kernel - loads the installer kernel. In most cases it should not be changed. append - contains additional kernel options. The initrd= and inst.stage2 options are mandatory; you can add others. For information about the options that are applicable to Anaconda refer to Types of boot options . One of the notable options is inst.ks= , which allows you to specify a location of a Kickstart file. You can place a Kickstart file on the boot ISO image and use the inst.ks= option to specify its location; for example, you can place a kickstart.ks file into the image's root directory and use inst.ks=hd:LABEL=RHEL-8-BaseOS-x86_64:/kickstart.ks . You can also use dracut options which are listed on the dracut.cmdline(7) man page on your system. Important When using a disk label to refer to a certain drive (as seen in the inst.stage2=hd:LABEL=RHEL-8-BaseOS-x86_64 option above), replace all spaces with \x20 . Other important options which are not included in the menu entry definition are: timeout - determines the time for which the boot menu is displayed before the default menu entry is automatically used. The default value is 600 , which means the menu is displayed for 60 seconds. Setting this value to 0 disables the timeout option. Note Setting the timeout to a low value such as 1 is useful when performing a headless installation. 
This avoids waiting for the default timeout to expire. menu begin and menu end - determine the start and end of a submenu block, allowing you to add additional options such as troubleshooting and grouping them in a submenu. A simple submenu with two options (one to continue and one to go back to the main menu) looks similar to the following: The submenu entry definitions are similar to normal menu entries, but grouped between menu begin and menu end statements. The menu exit line in the second option exits the submenu and returns to the main menu. menu background - the menu background can either be a solid color (see menu color below), or an image in a PNG, JPEG or LSS16 format. When using an image, make sure that its dimensions correspond to the resolution set using the set resolution statement. Default dimensions are 640x480. menu color - determines the color of a menu element. The full format is: Most important parts of this command are: element - determines which element the color will apply to. foreground and background - determine the actual colors. The colors are described using #AARRGGBB notation in hexadecimal format. The AA part determines opacity: 00 for fully transparent, ff for fully opaque. menu help textfile - creates a menu entry which, when selected, displays a help text file. Additional resources For a complete list of ISOLINUX configuration file options, see the Syslinux Wiki . 3.3. Systems with UEFI firmware The GRUB2 boot loader is used on systems with UEFI firmware. The EFI/BOOT/grub.cfg configuration file on the boot media contains a list of preconfigured menu entries and other directives which control the appearance and functionality of the Boot menu. In the configuration file, the default menu entry for Red Hat Enterprise Linux ( Test this media & install Red Hat Enterprise Linux 8 ) is defined in the following block: Where: menuentry - Defines the title of the entry. It is specified in single or double quotes ( ' or " ). You can use the --class option to group menu entries into different classes , which can then be styled differently using GRUB2 themes. Note As shown in the above example, you must enclose each menu entry definition in curly braces ( {} ). linuxefi - Defines the kernel that boots ( /images/pxeboot/vmlinuz in the above example) and any additional options. You can customize these options to change the behavior of the boot entry. For details about the options that are applicable to Anaconda , see Kickstart boot options . One of the notable options is inst.ks= , which allows you to specify a location of a Kickstart file. You can place a Kickstart file on the boot ISO image and use the inst.ks= option to specify its location; for example, you can place a kickstart.ks file into the image's root directory and use inst.ks=hd:LABEL=RHEL-8-BaseOS-x86_64:/kickstart.ks . You can also use dracut options which are listed on the dracut.cmdline(7) man page on your system. Important When using a disk label to refer to a certain drive (as seen in the inst.stage2=hd:LABEL=RHEL-8-BaseOS-x86_64 option above), replace all spaces with \x20 . initrdefi - location of the initial RAM disk (initrd) image to be loaded. Other options used in the grub.cfg configuration file are: set timeout - determines how long the boot menu is displayed before the default menu entry is automatically used. The default value is 60 , which means the menu is displayed for 60 seconds. Setting this value to -1 disables the timeout completely.
Note Setting the timeout to 0 is useful when performing a headless installation, because this setting immediately activates the default boot entry. submenu - A submenu block allows you to create a sub-menu and group some entries under it, instead of displaying them in the main menu. The Troubleshooting submenu in the default configuration contains entries for rescuing an existing system. The title of the entry is in single or double quotes ( ' or " ). The submenu block contains one or more menuentry definitions as described above, and the entire block is enclosed in curly braces ( {} ). For example: set default - Determines the default entry. The entry numbers start from 0 . If you want to make the third entry the default one, use set default=2 and so on. theme - determines the directory which contains GRUB2 theme files. You can use the themes to customize visual aspects of the boot loader - background, fonts, and colors of specific elements. Additional resources For additional information about customizing the boot menu, see GNU GRUB Manual 2.00 . For more general information about GRUB2 , see Managing, monitoring and updating the kernel .
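For example, a sketch of an additional menuentry block that adds a Kickstart file to the default UEFI boot entry; the disk label and the kickstart.ks path follow the examples above and must be adjusted to match your media:
menuentry 'Install Red Hat Enterprise Linux 8 with Kickstart' --class fedora --class gnu-linux --class gnu --class os {
  linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-8-BaseOS-x86_64 inst.ks=hd:LABEL=RHEL-8-BaseOS-x86_64:/kickstart.ks quiet
  initrdefi /images/pxeboot/initrd.img
}
Adding a new block instead of editing the existing entries keeps the default menu entries intact.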
[ "label check menu label Test this ^media & install Red Hat Enterprise Linux 8. menu default kernel vmlinuz append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-8-BaseOS-x86_64 rd.live.check quiet", "menu begin ^Troubleshooting menu title Troubleshooting label rescue menu label ^Rescue a Red Hat Enterprise Linux system kernel vmlinuz append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-8-BaseOS-x86_64 rescue quiet menu separator label returntomain menu label Return to ^main menu menu exit menu end", "menu color element ansi foreground background shadow", "menuentry 'Test this media & install Red Hat Enterprise Linux 8' --class fedora --class gnu-linux --class gnu --class os { linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-8-BaseOS-x86_64 rd.live.check quiet initrdefi /images/pxeboot/initrd.img }", "submenu 'Submenu title' { menuentry 'Submenu option 1' { linuxefi /images/vmlinuz inst.stage2=hd:LABEL=RHEL-8-BaseOS-x86_64 xdriver=vesa nomodeset quiet initrdefi /images/pxeboot/initrd.img } menuentry 'Submenu option 2' { linuxefi /images/vmlinuz inst.stage2=hd:LABEL=RHEL-8-BaseOS-x86_64 rescue quiet initrdefi /images/initrd.img } }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/customizing_anaconda/customizing-the-boot-menu_customizing-anaconda
Chapter 9. NUMA
Chapter 9. NUMA Historically, all memory on AMD64 and Intel 64 systems was equally accessible by all CPUs. Known as Uniform Memory Access (UMA), this model means that access times are the same no matter which CPU performs the operation. This behavior is no longer the case with recent AMD64 and Intel 64 processors. In Non-Uniform Memory Access (NUMA), system memory is divided across NUMA nodes , which correspond to sockets or to a particular set of CPUs that have identical access latency to the local subset of system memory. This chapter describes memory allocation and NUMA tuning configurations in virtualized environments. 9.1. NUMA Memory Allocation Policies The following policies define how memory is allocated from the nodes in a system: Strict Strict policy means that the allocation fails if the memory cannot be allocated on the target node. Specifying a NUMA nodeset list without defining a memory mode attribute defaults to strict mode. Interleave Memory pages are allocated across the nodes specified by a nodeset in a round-robin fashion. Preferred Memory is allocated from a single preferred memory node. If sufficient memory is not available, memory can be allocated from other nodes. To enable the intended policy, set it as the value of the mode attribute of the <memory> element inside <numatune> in the domain XML file: <numatune> <memory mode='preferred' nodeset='0'/> </numatune> Important If memory is overcommitted in strict mode and the guest does not have sufficient swap space, the kernel will kill some guest processes to retrieve additional memory. Red Hat recommends using preferred allocation and specifying a single nodeset (for example, nodeset='0') to prevent this situation.
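Before choosing a nodeset, it helps to inspect the NUMA layout of the host. A minimal sketch using standard tools, assuming the numactl package and the libvirt client are installed on the host:
# numactl --hardware     # lists NUMA nodes with their CPUs and memory sizes
# virsh nodeinfo         # shows CPU topology and NUMA cell count as seen by libvirt
# virsh freecell --all   # shows free memory per NUMA cell
The output indicates which nodes have enough free memory to host the guest before you commit to a nodeset value.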
[ "<numatune> <memory mode=' preferred ' nodeset='0'> </numatune>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-numa
Chapter 8. Organizing and Grouping Entries
Chapter 8. Organizing and Grouping Entries Entries contained within the directory can be grouped in different ways to simplify the management of user accounts. Red Hat Directory Server supports a variety of methods for grouping entries and sharing attributes between entries. To take full advantage of the features offered by roles and class of service, determine the directory topology when planning the directory deployment. 8.1. Using Groups Similar to the operating system, you can add users to groups in Directory Server. Groups work the opposite way to roles. If you are using roles, the DN of the assigned role is stored in the nsRoleDN attribute in the user object. If you use groups, the DNs of the users who are members of the group are stored in member attributes in the group object. If you enable the memberOf plug-in, the groups that a user is a member of are additionally stored in the memberOf attribute in the user object. With this plug-in enabled, groups provide the same benefit as roles: you can list the group memberships of a user, similar to when using roles. Additionally, groups are faster than roles. For further details about using the memberOf plug-in, see Section 8.1.4, "Listing Group Membership in User Entries" . 8.1.1. The Different Types of Groups Creating both static and dynamic groups from the command line is a similar process. A group entry contains the group name, the type of group, and a members attribute. There are several different options for the type of group; these are described in more detail in the Red Hat Directory Server 10 Configuration, Command, and File Reference . The type of group in this case refers to the type of defining member attribute it has: groupOfNames (recommended) is a simple group that allows any entry to be added. The attribute used to determine members for this is member . groupOfUniqueNames , like groupOfNames , simply lists user DNs as members, but the members must be unique. This prevents users being added more than once as a group member, which is one way of preventing self-referential group memberships. The attribute used to determine members for this is uniqueMember . groupOfURLs uses a list of LDAP URLs to filter and generate its membership list. This object class is required for any dynamic group and can be used in conjunction with groupOfNames and groupOfUniqueNames . groupOfCertificates is similar to groupOfURLs in that it uses an LDAP filter to search for and identify certificates (or, really, certificate names) to identify group members. This is useful for group-based access control, since the group can be given special access permissions. The attribute used to determine members for this is memberCertificate . The following table shows the default attributes for groups: Table 8.1. Dynamic and Static Group Schema Type of Group Group Object Classes Member Attributes Static groupOfNames [a] member groupOfUniqueNames [a] uniqueMember Dynamic groupOfURLs memberURL groupOfCertificates memberCertificate [a] If this object class is used together with one of the dynamic object classes, the group becomes dynamic. The following two examples show a static and a dynamic group entry: Example 8.1. A Static Group Entry A static group entry lists the specific members of the group. For example: Example 8.2.
A Dynamic Group Entry A dynamic group uses at least one LDAP URL to identify entries belonging to the group and can specify multiple LDAP URLs or, if used with another group object class like groupOfUniqueNames , can explicitly list some group members along with the dynamic LDAP URL. For example: Note The memberOf plug-in does not support dynamically generated group memberships. If you set the memberURL attribute instead of listing the group members in an attribute, the memberOf plug-in does not add the memberOf attribute to the user objects that match the filter. 8.1.2. Creating a Static Group Directory Server only supports creating static groups using the command line. 8.1.2.1. Creating a Static Group Using the Command Line This section describes how to create the different types of static groups using the command line. For details about the different static groups, see Section 8.1.1, "The Different Types of Groups" . Creating a Static Group with the groupOfNames Object Class The dsidm utility creates static groups in the cn=Groups entry in the specified base DN. For example, to create the static example_group group with the groupOfNames object class in the cn=Groups,dc=example,dc=com entry Creating a Static Group with the groupOfUniqueNames Object Class To create a static group with the groupOfUniqueNames object class, use the ldapmodify utility to add the entry. For example, to create the static example_group group with the groupOfUniqueNames object class in the cn=Groups,dc=example,dc=com entry: 8.1.3. Creating a Dynamic Group Directory Server only supports creating dynamic groups using the command line. 8.1.3.1. Creating a Dynamic Group Using the Command Line This section describes how to create the different types of dynamic groups using the command line. For details about the different dynamic groups, see Section 8.1.1, "The Different Types of Groups" . Creating a Dynamic Group with the groupOfURLs Object Class For example, to create the dynamic example_group group with the groupOfURLs object class in the cn=Groups,dc=example,dc=com entry: Creating a Dynamic Group with the groupOfCertificates Object Class For example, to create the dynamic example_group group with the groupOfCertificates object class in the cn=Groups,dc=example,dc=com entry: 8.1.4. Listing Group Membership in User Entries The entries which belong to a group are defined, in some way, in the group entry itself. This makes it very easy to look at a group and see its members and to manage group membership centrally. However, there is no good way to find out what groups a single user belongs to. There is nothing in a user entry which indicates its memberships, as there are with roles. The MemberOf Plug-in correlates group membership lists to the corresponding user entries. The MemberOf Plug-in analyzes the member attribute in a group entry and automatically writes a corresponding memberOf attribute in the member's entry. (By default, this checks the member attribute, but multiple attribute instances can be used to support multiple different group types.) As membership changes, the plug-in updates the memberOf attributes on the user entries. The MemberOf Plug-in provides a way to view the groups to which a user belongs simply by looking at the entry, including nested group membership. It can be very difficult to backtrack memberships through nested groups, but the MemberOf Plug-in shows memberships for all groups, direct and indirect. 
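For example, once the plug-in has populated the memberOf attribute, a single search against the user entry returns all of its group memberships; the server URL, bind DN, and uid value below are placeholders:
$ ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com -x -b "dc=example,dc=com" "(uid=jsmith)" memberOf
The search returns one memberOf value per group, including groups the user belongs to only through nested membership.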
The MemberOf Plug-in manages member attributes for static groups, not dynamic groups or circular groups. 8.1.4.1. Considerations When Using the memberOf Plug-in This section describes important considerations when you want to use the memberOf plug-in. Using the memberOf Plug-in in a Replication Topology There are two approaches to manage the memberOf attribute in a replication topology: Enable the memberOf plug-in on all supplier and read-only replica servers in the topology. In this case, you must exclude the memberOf attribute from replication in all replication agreements. For details about excluding attributes, see Section 15.1.7, "Replicating a Subset of Attributes with Fractional Replication" . Enable the memberOf plug-in only on all supplier servers in the topology. For this: You must disable replication of the memberOf attribute to all write-enabled suppliers in the replication agreement. For details about excluding attributes, see Section 15.1.7, "Replicating a Subset of Attributes with Fractional Replication" . You must enable replication of the memberOf attribute to all read-only replicas in their replication agreement. You must not enable the memberOf plug-in on read-only replicas. Using the memberOf plug-in With Distributed Databases As described in Section 2.2.1, "Creating Databases" , you can store sub-trees of your directory in individual databases. By default, the memberOf plug-in only updates user entries which are stored within the same database as the group. To enable the plug-in to also update users in databases other than the group's database, you must set the memberOfAllBackends parameter to on . See Section 8.1.4.5.2, "Configuring the MemberOf Plug-in on Each Server Using the Web Console" . 8.1.4.2. Required Object Classes by the memberOf Plug-In By default, the memberOf plug-in adds the MemberOf object class to objects to provide the memberOf attribute. This object class is safe to add to any object for this purpose, and no further action is required to enable this plug-in to operate correctly. Alternatively, you can create user objects that contain the inetUser or inetAdmin object class. Both object classes support the memberOf attribute as well. To configure nested groups, the group must use the extensibleObject object class. Note If directory entries do not contain an object class that supports the required attributes, operations fail with the following error: 8.1.4.3. The MemberOf Plug-in Syntax The MemberOf Plug-in instance defines two attributes, one for the group member attribute to poll ( memberOfGroupAttr ) and the other for the attribute to create and manage in the member's user entry ( memberOfAttr ). The memberOfGroupAttr attribute is multi-valued. Because different types of groups use different member attributes, using multiple memberOfGroupAttr attributes allows the plug-in to manage multiple types of groups. The plug-in instance also gives the plug-in path and function to identify the MemberOf Plug-in and contains a state setting to enable the plug-in, both of which are required for all plug-ins. The default MemberOf Plug-in is shown in Example 8.3, "Default MemberOf Plug-in Entry" . Example 8.3. Default MemberOf Plug-in Entry For details about the parameters used in the example and other parameters you can set, see the MemberOf Plug-in Attributes section in the Red Hat Directory Server Command, Configuration, and File Reference .
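The memberOfGroupAttr and memberOfAttr parameters map to the dsconf options used in the procedures below; as a sketch, with the bind DN and instance URL as placeholders and option names that may vary slightly between versions:
# dsconf -D "cn=Directory Manager" ldap://server.example.com plugin memberof set --groupattr member uniqueMember --attr memberOf
# dsconf -D "cn=Directory Manager" ldap://server.example.com plugin memberof show
The first command sets the group member attributes to poll and the attribute written to user entries; the second displays the current plug-in configuration.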
Note To maintain backwards compatibility with older versions of Directory Server, which only allowed a single member attribute (by default, member ), it may be necessary to include the member group attribute, or whatever member attribute was used, in addition to any new member attributes used in the plug-in configuration. 8.1.4.4. Enabling the MemberOf Plug-in This section describes how to enable the MemberOf plug-in. 8.1.4.4.1. Enabling the MemberOf Plug-in Using the Command Line Enable the MemberOf plug-in using the command line: Use the dsconf utility to enable the plug-in: Restart the instance: 8.1.4.4.2. Enabling the MemberOf Plug-in Using the Web Console Enable the MemberOf plug-in using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select the Plugins menu. Select the MemberOf plug-in. Change the status to ON to enable the plug-in. Restart the instance. See Section 1.5.2, "Starting and Stopping a Directory Server Instance Using the Web Console" . 8.1.4.5. Configuring the MemberOf Plug-in on Each Server If you do not want to replicate the configuration of the MemberOf plug-in, configure the plug-in manually on each server. 8.1.4.5.1. Configuring the MemberOf Plug-in on Each Server Using the Command Line To configure the MemberOf plug-in using the command line: Enable the plug-in. See Section 8.1.4.4.1, "Enabling the MemberOf Plug-in Using the Command Line" . To retrieve members of a group from a different attribute than member , which is the default, set the memberOfGroupAttr parameter to the respective attribute name. For example, to read group members from uniqueMember attributes, replace the current value of memberOfGroupAttr : Optionally, display the attribute that is currently configured: The command displays that currently only the member attribute is configured to retrieve members of a group. Remove all attributes that are currently set from the configuration: Note It is not possible to remove a specific group attribute. Add the uniqueMember attribute to the configuration: To set multiple attributes, pass them all to the --groupattr parameter. For example: By default, the MemberOf plug-in adds the memberOf attribute to user entries. To use a different attribute, set the name of the attribute in the memberOfAttr parameter. For example, to add the customMemberOf attribute to user records, replace the current value of memberOfAttr : Optionally, display the attribute that is currently configured: Configure the MemberOf plug-in to add the customMemberOf attribute to user entries: Note You can only set this parameter to an attribute that supports DN syntax. In an environment that uses distributed databases, you can configure the plug-in to search user entries in all databases instead of only the local database: Restart the instance: 8.1.4.5.2. Configuring the MemberOf Plug-in on Each Server Using the Web Console To configure the MemberOf plug-in using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Plugins menu. Select the memberOf plug-in. Change the status to ON to enable the plug-in. Fill in the fields to configure the plug-in. For example, to configure the plug-in to add the customMemberOf attribute to user entries if the uniqueMember attribute is added to a group: Click Save . Restart the instance.
See Section 1.5.2, "Starting and Stopping a Directory Server Instance Using the Web Console" . 8.1.4.6. Using the MemberOf Plug-in Shared Configuration By default, the configuration of the MemberOf plug-in is stored on each server. Using the shared configuration feature of the plug-in, the configuration can be stored outside of the cn=config suffix and replicated. Administrators can use the same settings without configuring the plug-in manually on each server. Enable the plug-in. See Section 8.1.4.4, "Enabling the MemberOf Plug-in" . Add the shared configuration entry for the MemberOf plug-in. For example: This automatically enables the shared configuration entry on the server on which you ran the command. Restart the instance: On all other servers in the replication topology that should use the shared configuration, enable the shared configuration: Enable the plug-in. See Section 8.1.4.4, "Enabling the MemberOf Plug-in" . Set the DN that stores the shared configuration. For example: Restart the instance: Important After enabling the shared configuration, the plug-in ignores all parameters set in the cn=MemberOf Plugin,cn=plugins,cn=config plug-in entry and only uses settings from the shared configuration entry. 8.1.4.7. Setting the Scope of the MemberOf Plug-in If you configured several back ends or multiple-nested suffixes, you can use the memberOfEntryScope and memberOfEntryScopeExcludeSubtree parameters to set what suffixes the MemberOf plug-in works on. If you add a user to a group, the MemberOf plug-in only adds the memberOf attribute to the group if both the user and the group are in the plug-in's scope. For example, to configure the MemberOf plug-in to work on all entries in dc=example,dc=com , but to exclude entries in ou=private,dc=example,dc=com : If you moved a user entry out of the scope by using the --scope DN parameter: The membership attribute, such as member , is updated in the group entry to remove the user DN value. The memberOf attribute is updated in the user entry to remove the group DN value. Note The value set in the --exclude parameter has a higher priority than values set in --scope . If the scopes set in both parameters overlap, the MemberOf plug-in only works on the non-overlapping directory entries. 8.1.4.8. Regenerating memberOf Values The MemberOf plug-in automatically manages memberOf attributes on group member entries, based on the configuration in the group entry itself. However, the memberOf attribute can be manually edited in a user entry or new entries can be imported or replicated to the server that have a memberOf attribute already set. These situations create inconsistencies between the memberOf configuration managed by the server plug-in and the actual memberships defined in an entry. For example, to regenerate the memberOf values in dc=example,dc=com entry and subentries: The -f filter option is optional. Use the filter to regenerate the memberOf attributes in user entries matching the filter. If you do not specify a filter, the tasks regenerates the attributes in all entries containing the inetUser , inetAdmin , or nsMemberOf object class. Note Regeneration tasks run locally, even if the entries themselves are replicated. This means that memberOf attributes for entries on other servers are not updated until the updated entry is replicated. 8.1.5. 
Automatically Adding Entries to Specified Groups Section 8.1.5.1, "Looking at the Structure of an Automembership Rule" Section 8.1.5.4, "Examples of Automembership Rules" Section 8.1.5.2, "Configuring Auto Membership Definitions" Group management can be a critical factor for managing directory data, especially for clients which use Directory Server data and organization or which use groups to apply functionality to entries. Groups make it easier to apply policies consistently and reliably across the directory. Password policies, access control lists, and other rules can all be based on group membership. Being able to assign new entries to groups, automatically, at the time that an account is created ensures that the appropriate policies and functionality are immediately applied to those entries - without requiring administrator intervention. Dynamic groups are one method of creating groups and assigning members automatically because any matching entry is automatically included in the group. For applying Directory Server policies and settings, this is sufficient. However, LDAP applications and clients commonly need a static and explicit list of group members in order to perform whatever operation is required. And all of the members in static groups have to be manually added to those groups. The static group itself cannot search for members like a dynamic group, but there is a way to allow a static group to have members added to it automatically - the Auto Membership Plug-in . Automembership essentially allows a static group to act like a dynamic group. Different automembership definitions create searches that are automatically run on all new directory entries. The automembership rules search for and identify matching entries - much like the dynamic search filters - and then explicitly add those entries as members to the static group. Note By default, the autoMemberProcessModifyOps parameter in the cn=Auto Membership Plugin,cn=plugins,cn=config entry is set to on . With this setting, the Automembership plug-in also updates group memberships when an administrator moves a user to a different group by editing a user entry. If you set autoMemberProcessModifyOps to off , Directory Server invokes the plug-in only when you add a group entry to the user, and you must manually run a fix-up task to update the group membership. The Auto Membership Plug-in can target any type of object stored in the directory: users, machines and network devices, customer data, or other assets. Note The Auto Membership Plug-in adds a new member to an existing group based on defined criteria. It does not create a group for the new entry. To create a corresponding group entry when a new entry of a certain type is created, use the Managed Entries Plug-in. This is covered in Section 8.3, "Automatically Creating Dual Entries" . 8.1.5.1. Looking at the Structure of an Automembership Rule The Auto Membership Plug-in itself is a container entry in cn=plugins,cn=config . Group assignments are defined through child entries. 8.1.5.1.1. The Automembership Configuration Entry Automembership assignments are created through a main definition entry, a child of the Auto Membership Plug-in entry. 
Each definition entry defines three elements: An LDAP search to identify entries, including both a search scope and a search filter ( autoMemberScope and autoMemberFilter ) A default group to which to add the member entries ( autoMemberDefaultGroup ) The member entry format, which is the attribute in the group entry, such as member , and the attribute value, such as dn ( autoMemberGroupingAttr ) The definition is the basic configuration for an automember rule. It identifies all of the required information: what a matching member entry looks like and a group for that member to belong to. For example, this definition assigns all users with the object class set to ntUser to the cn=windows-users group: For details about the attributes used in the example and other attributes you can set in this entry, see the cn=Auto Membership Plugin,cn=plugins,cn=config entry description in the Red Hat Directory Server Configuration, Command, and File Reference . 8.1.5.1.2. Additional Regular Expression Entries For something like a users group, where more than likely all matching entries should be added as members, a simple definition is sufficient. However, there can be instances where entries that match the LDAP search filter should be added to different groups, depending on the value of some other attribute. For example, machines may need to be added to different groups depending on their IP address or physical location; users may need to be in different groups depending on their employee ID number. The automember definition can use regular expressions to provide additional conditions on what entries to include or exclude from a group, and then a new, specific group to add those selected entries to. For example, an automember definition sets all machines to be added to a generic host group. Example 8.4. Automember Definition for a Host Group A regular expression rule is added so that any machine with a fully-qualified domain name within a given range is added to a web server group. Example 8.5. Regular Expression Condition for a Web Server Group So, any host machine added with a fully-qualified domain name that matches the expression ^www\.web[0-9]+\.example\.com , such as www.web1.example.com , is added to the cn=webservers group, defined for that exact regular expression. Any other machine entry, which matches the LDAP filter objectclass=ipHost but with a different type of fully-qualified domain name, is added to the general host group, cn=systems , defined in the main definition entry. The group in the definition, then, is a fallback for entries which match the general definition, but do not meet the conditions in the regular expression rule. Regular expression rules are child entries of the automember definition. Figure 8.1. Regular Expression Conditions Each rule can include multiple inclusion and exclusion expressions. (Exclusions are evaluated first.) If an entry matches any inclusion rule, it is added to the group. There can be only one target group given for the regular expression rule. Table 8.2. Regular Expression Condition Attributes Attribute Description autoMemberRegexRule (required object class) Identifies the entry as a regular expression rule. This entry must be a child of an automember definition ( objectclass: autoMemberDefinition ). autoMemberInclusiveRegex Sets a regular expression to use to identify entries to include. Only matching entries are added to the group. 
Multiple regular expressions could be used, and if an entry matches any one of those expressions, it is included in the group. The format of the expression is a Perl-compatible regular expression (PCRE). For more information on PCRE patterns, see the pcresyntax (3) man page. This is a multi-valued attribute. autoMemberExclusiveRegex Sets a regular expression to use to identify entries to exclude. If an entry matches the exclusion condition, then it is not included in the group. Multiple regular expressions could be used, and if an entry matches any one of those expressions, it is excluded from the group. The format of the expression is a Perl-compatible regular expression (PCRE). For more information on PCRE patterns, see the pcresyntax (3) man page. This is a multi-valued attribute. Note Exclude conditions are evaluated first and take precedence over include conditions. autoMemberTargetGroup Sets which group to add the entry to as a member, if it meets the regular expression conditions. 8.1.5.2. Configuring Auto Membership Definitions To use the Auto Membership plug-in, create definitions for the plug-in. 8.1.5.2.1. Configuring Auto Membership Definitions Using the Command Line To create Auto Membership definitions using the command line: Enable the Auto Membership plug-in: Create an Auto Membership definition. For example: Optionally, you can set further parameters in an Auto Membership definition, for example, to use regular expressions to identify entries to include. Use the ldapmodify utility to add or update these parameters in the cn= definition_name ,cn=Auto Membership Plugin,cn=plugins,cn=config entry. For parameters you can set, see the cn=Auto Membership Plugin,cn=plugins,cn=config entry description in the Red Hat Directory Server Configuration, Command, and File Reference . Restart the instance: 8.1.5.2.2. Configuring Auto Membership Definitions Using the Web Console To create Auto Membership definitions using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Plugins menu. Select the Auto Membership plug-in. Change the status to ON to enable the plug-in. Click Add Definition . Fill in the fields. For example: Optionally, add a regular expression filter. Click Save . Restart the instance. See Section 1.5.2, "Starting and Stopping a Directory Server Instance Using the Web Console" . 8.1.5.3. Updating Existing Entries to Apply Auto Membership Definitions By default, the autoMemberProcessModifyOps parameter in the cn=Auto Membership Plugin,cn=plugins,cn=config entry is enabled. With this setting, the Automembership plug-in also updates group memberships when an administrator moves a user to a different group by editing a user entry. However, if you set autoMemberProcessModifyOps to off , you must manually run a fix-up task when you add new entries to the directory or change existing entries. To create the task entry: When the task is completed, the entry is removed from the directory configuration. 8.1.5.4. Examples of Automembership Rules Automembership rules are usually applied to users and to machines (although they can be applied to any type of entry). There are a handful of examples that may be useful in planning automembership rules: Different host groups based on IP address Windows user groups Different user groups based on employee ID Example 8.6. Host Groups by IP Address The automember rule first defines the scope and target of the rule.
The example in Section 8.1.5.1.2, "Additional Regular Expression Entries" uses the configuration group to define the fallback group and a regular expression entry to sort out matching entries. The scope is used to find all host entries. The plug-in then iterates through the regular expression entries. If an entry matches an inclusive regular expression, then it is added to that host group. If it does not match any group, it is added to the default group. The actual plug-in configuration entries are configured like this, for the definition entry and two regular expression entries to filter hosts into a web servers group or a mail servers group. Example 8.7. Windows User Group The basic users group shown in Section 8.1.5.1.1, "The Automembership Configuration Entry" uses the posixAccount attribute to identify all new users. All new users created within Directory Server are created with the posixAccount attribute, so that is a safe catch-all for new Directory Server users. However, when user accounts are synchronized over from the Windows domain to the Directory Server, the Windows user accounts are created without the posixAccount attribute. Windows users are identified by the ntUser attribute. The basic, all-users group rule can be modified to target Windows users specifically, which can then be added to the default all-users group or to a Windows-specific group. Example 8.8. User Groups by Employee Type The Auto Membership Plug-in can work on custom attributes, which can be useful for entries which are managed by other applications. For example, a human resources application may create and then reference users based on the employee type, in a custom employeeType attribute. Much like Example 8.6, "Host Groups by IP Address" , the user type rule uses two regular expression filters to sort full time and temporary employees, only this example uses an explicit value rather than a true regular expression. For other attributes, it may be more appropriate to use a regular expression, like basing the filter on an employee ID number range. 8.1.5.5. Testing Automembership Definitions Because each instance of the Auto Member Plug-in is a set of related-but-separate entries for the definition and regular expression, it can be difficult to see exactly how users are going to be mapped to groups. This becomes even more difficult when there are multiple rules which target different subsets of users. There are two dry-run tasks which can be useful to determine whether all of the different Auto Member Plug-in definitions are assigning groups properly as designed. Testing with Existing Entries cn=automember export updates runs against existing entries in the directory and exports the results of what users would have been added to what groups, based on the rules. This is useful for testing existing rules against existing users to see how your real deployment are performing. This task requires the same information as the cn=automember rebuild membership task - the base DN to search, search filter, and search scope - and has an additional parameter to specify an export LDIF file to record the proposed entry updates. Testing with an Import LDIF cn=automember map updates takes an import LDIF of new users and then runs the new users against the current automembership rules. This can be very useful for testing a new rule, before applying it to (real) new or existing user entries. This is called a map task because it maps or relates changes for proposed new entries to the existing rules. 
This task only requires two attributes: the location of the input LDIF (which must contain at least some user entries) and an output LDIF file to which to write the proposed entry updates. Both the input and output LDIF files are absolute paths on the local machine. For example, using ldapmodify : 8.1.5.6. Canceling the Auto Membership Plug-in Task The Auto Membership plug-in task can generate high CPU usage on the server if the Directory Server has a complex configuration (large groups, complex rules, and interaction with other plug-ins). To prevent performance issues, you can cancel the Auto Membership plug-in task. Procedure To cancel the Auto Membership plug-in task, enter: Verification To see the list of all Auto Membership plug-in tasks, including canceled tasks, enter:
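Pulling the pieces of this chapter together, the following sketch first adds a regular expression rule as a child of an existing definition with ldapmodify, restarts the instance, and then reads the target group back with ldapsearch to confirm the plug-in populated it. The definition name Hostgroups, the group DNs, and the fqdn pattern are taken from the examples above; the host name and instance name are placeholders, so adjust them for your deployment.
# Add an inclusive regular expression rule under the Hostgroups definition
ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x <<EOF
dn: cn=webservers,cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config
changetype: add
objectclass: autoMemberRegexRule
cn: webservers
autoMemberTargetGroup: cn=webservers,cn=hostgroups,dc=example,dc=com
autoMemberInclusiveRegex: fqdn=^www\.web[0-9]+\.example\.com
EOF
# Restart the instance so the plug-in re-reads its configuration
dsctl instance_name restart
# After new host entries are added (or a fix-up task has run), verify the target group
ldapsearch -D "cn=Directory Manager" -W -p 389 -h server.example.com -x -b "cn=webservers,cn=hostgroups,dc=example,dc=com" -s base member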
[ "objectClass: top objectClass: groupOfUniqueNames cn: static group description: Example static group. uniqueMember: uid=mwhite,ou=People,dc=example,dc=com uniqueMember: uid=awhite,ou=People,dc=example,dc=com", "objectClass: top objectClass: groupOfUniqueNames objectClass: groupOfURLs cn: dynamic group description: Example dynamic group. memberURL: ldap:///dc=example,dc=com??sub?(&(objectclass=person)(cn=*sen*))", "dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \"dc=example,dc=com\" group create --cn \"example_group\"", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=example_group,cn=Groups,dc=example,dc=com changetype: add objectClass: top objectClass: groupOfUniqueNames cn: example_group description: Example static group with unique members", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=example_group,cn=Groups,dc=example,dc=com changetype: add objectClass: top objectClass: groupOfURLs cn: example_group description: Example dynamic group for user entries memberURL: ldap:///dc=example,dc=com??sub?(&(objectclass=person)(cn=*sen*))", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=example_group,cn=Groups,dc=example,dc=com changetype: add objectClass: top objectClass: groupOfURLs cn: example_group description: Example dynamic group for certificate entries memberCertificate:", "LDAP: error code 65 - Object Class Violation", "dn: cn=MemberOf Plugin,cn=plugins,cn=config objectClass: top objectClass: nsSlapdPlugin objectClass: extensibleObject cn: MemberOf Plugin nsslapd-pluginPath: libmemberof-plugin nsslapd-pluginInitfunc: memberof_postop_init nsslapd-pluginType: postoperation nsslapd-pluginEnabled: on nsslapd-plugin-depends-on-type: database memberOfGroupAttr: member memberOfGroupAttr: uniqueMember memberOfAttr: memberOf memberOfAllBackends: on nsslapd-pluginId: memberOf nsslapd-pluginVersion: X.Y.Z nsslapd-pluginVendor: Red Hat, Inc. 
nsslapd-pluginDescription: memberOf plugin", "memberOfGroupAttr: member memberOfGroupAttr: uniqueMember", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof enable", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof show memberofgroupattr: member", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --groupattr delete Successfully changed the cn=MemberOf Plugin,cn=plugins,cn=config", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --groupattr uniqueMember successfully added memberOfGroupAttr value \"uniqueMember\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --groupattr member uniqueMember", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof show memberofattr: memberOf", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --attr customMemberOf memberOfAttr set to \"customMemberOf\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --allbackends on memberOfAllBackends enabled successfully", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof config-entry add \"cn=shared_MemberOf_config,dc=example,dc=com\" --groupattr \"member\" --attr \"memberOf\"", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --config-entry cn=shared_MemberOf_config,dc=example,dc=com", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --scope \"dc=example,com\" dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --exclude \"dc=group,dc=example,com\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof fixup -f \"(|(objectclass=inetuser)(objectclass=inetadmin)(objectclass=nsmemberof))\" \"dc=example,dc=com\" Attempting to add task entry Successfully added task entry", "dn: cn=Windows Users,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberDefinition autoMemberScope: ou=People,dc=example,dc=com autoMemberFilter: objectclass=ntUser autoMemberDefaultGroup: cn=windows-group,cn=groups,dc=example,dc=com autoMemberGroupingAttr: member:dn", "dn: cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberDefinition cn: Hostgroups autoMemberScope: dc=example,dc=com autoMemberFilter: objectclass=ipHost autoMemberDefaultGroup: cn=systems,cn=hostgroups,dc=example,dc=com autoMemberGroupingAttr: member:dn", "dn: cn=webservers,cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberRegexRule description: Group for webservers cn: webservers autoMemberTargetGroup: cn=webservers,cn=hostgroups,dc=example,dc=com autoMemberInclusiveRegex: fqdn=^www\\.web[0-9]+\\.example\\.com", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin automember enable Enabled Auto Membership Plugin", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin automember definition definition_name add --default-group \" cn=windows-group,cn=groups,dc=example,dc=com \" --scope \" ou=People,dc=example,dc=com \" --filter \" objectclass=ntUser \" --grouping-attr \" member:dn \" Automember definition created successfully!", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin automember fixup -f \" filter \" -s 
scope", "configuration entry dn: cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberDefinition cn: Hostgroups autoMemberScope: dc=example,dc=com autoMemberFilter: objectclass=bootableDevice autoMemberDefaultGroup: cn=orphans,cn=hostgroups,dc=example,dc=com autoMemberGroupingAttr: member:dn regex entry #1 dn: cn=webservers,cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberRegexRule description: Group placement for webservers cn: webservers autoMemberTargetGroup: cn=webservers,cn=hostgroups,dc=example,dc=com autoMemberInclusiveRegex: fqdn=^www[0-9]+\\.example\\.com autoMemberInclusiveRegex: fqdn=^web[0-9]+\\.example\\.com autoMemberExclusiveRegex: fqdn=^www13\\.example\\.com autoMemberExclusiveRegex: fqdn=^web13\\.example\\.com regex entry #2 dn: cn=mailservers,cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberRegexRule description: Group placement for mailservers cn: mailservers autoMemberTargetGroup: cn=mailservers,cn=hostgroups,dc=example,dc=com autoMemberInclusiveRegex: fqdn=^mail[0-9]+\\.example\\.com autoMemberInclusiveRegex: fqdn=^smtp[0-9]+\\.example\\.com autoMemberExclusiveRegex: fqdn=^mail13\\.example\\.com autoMemberExclusiveRegex: fqdn=^smtp13\\.example\\.com", "dn: cn=Windows Users,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberDefinition autoMemberScope: dc=example,dc=com autoMemberFilter: objectclass=ntUser autoMemberDefaultGroup: cn=Windows Users,cn=groups,dc=example,dc=com autoMemberGroupingAttr: member:dn", "configuration entry dn: cn=Employee groups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberDefinition cn: Hostgroups autoMemberScope: ou=employees,ou=people,dc=example,dc=com autoMemberFilter: objectclass=inetorgperson autoMemberDefaultGroup: cn=general,cn=employee groups,ou=groups,dc=example,dc=com autoMemberGroupingAttr: member:dn regex entry #1 dn: cn=full time,cn=Employee groups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberRegexRule description: Group for full time employees cn: full time autoMemberTargetGroup: cn=full time,cn=employee groups,ou=groups,dc=example,dc=com autoMemberInclusiveRegex: employeeType=full regex entry #2 dn: cn=temporary,cn=Employee groups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberRegexRule description: Group placement for interns, contractors, and seasonal employees cn: temporary autoMemberTargetGroup: cn=temporary,cn=employee groups,ou=groups,dc=example,dc=com autoMemberInclusiveRegex: employeeType=intern autoMemberInclusiveRegex: employeeType=contractor autoMemberInclusiveRegex: employeeType=seasonal", "ldapadd -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn= test_export ,cn=automember export updates,cn=tasks,cn=config objectClass: top objectClass: extensibleObject cn: test_export basedn: dc=example,dc=com filter: (uid=*) scope: sub ldif: /tmp/automember-updates.ldif", "ldapadd -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn= test_mapping , cn=automember map updates,cn=tasks,cn=config objectClass: top objectClass: extensibleObject cn: test_mapping ldif_in: /tmp/entries.ldif ldif_out: /tmp/automember-updates.ldif", "dsconf server.example.com plugin automember abort-fixup", "dsconf server.example.com plugin automember fixup-status" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/advanced_entry_management
Chapter 3. Using language support for Apache Camel extension
Chapter 3. Using language support for Apache Camel extension Important The VS Code extensions for Apache Camel are listed as development support. For more information about the scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel . The Visual Studio Code language support extension adds the language support for Apache Camel for XML DSL and Java DSL code. 3.1. About language support for Apache Camel extension This extension provides completion, validation and documentation features for Apache Camel URI elements directly in your Visual Studio Code editor. It works as a client using the Microsoft Language Server Protocol which communicates with the Camel Language Server to provide all functionalities. 3.2. Features of language support for Apache Camel extension The important features of the language support extension are listed below: Language service support for Apache Camel URIs. Quick reference documentation when you hover the cursor over a Camel component. Diagnostics for Camel URIs. Navigation for Java and XML languages. Creating a Camel Route specified with Yaml DSL using Camel CLI. Create a Camel Quarkus project Create a Camel on SpringBoot project Specific Camel Catalog Version Specific Runtime provider for the Camel Catalog 3.3. Requirements The following points must be considered when using the Apache Camel Language Server: Java 17 is currently required to launch the Apache Camel Language Server. The java.home VS Code option can be set to use a different version of the JDK than the default one installed on the machine. For some features, JBang must be available on the system command line. For XML DSL files: Use an .xml file extension. Specify the Camel namespace, for reference, see http://camel.apache.org/schema/blueprint or http://camel.apache.org/schema/spring . For Java DSL files: Use a .java file extension. Specify the Camel package (usually from an imported package), for example, import org.apache.camel.builder.RouteBuilder . To reference the Camel component, use from or to and a string without a space. The string cannot be a variable. For example, from("timer:timerName") works, but from( "timer:timerName") and from(aVariable) do not work. 3.4. Installing Language support for Apache Camel extension You can download the Language support for Apache Camel extension from the VS Code Extension Marketplace and the Open VSX Registry. You can also install the Language Support for Apache Camel extension directly in Microsoft VS Code. Procedure Open the VS Code editor. In the VS Code editor, select View > Extensions . In the search bar, type Camel . Select the Language Support for Apache Camel option from the search results and then click Install. This installs the language support extension in your editor. 3.5. Using specific Camel catalog version You can use a specific Camel catalog version. Click File > Preferences > Settings > Apache Camel Tooling > Camel catalog version . For a Red Hat productized version that contains redhat in its version identifier, the Maven Red Hat repository is automatically added. Note The first time a version is used, it can take several seconds or minutes to become available, depending on the time needed to download the dependencies in the background. Limitations The Kamelet catalog used is the community-supported version only. For the list of supported Kamelets, see Supported Kamelets . Modeline configuration is based on the community version only. Not all traits and modeline parameters are supported.
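If you prefer to script the installation described in Section 3.4 rather than use the Extensions view, the VS Code command-line interface can install the extension by its Marketplace identifier. This is only a sketch: the identifier redhat.vscode-apache-camel is an assumption, so confirm it on the extension's Marketplace page before relying on it.
# Install the Language Support for Apache Camel extension from the command line
code --install-extension redhat.vscode-apache-camel
# Confirm that the extension is now installed
code --list-extensions | grep -i camel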
Additional resources Language Support for Apache Camel by Red Hat
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/tooling_guide/using-vscode-language-support-extension
Chapter 3. Recommendations
Chapter 3. Recommendations The configuration described in this section is not required, but may improve the stability or performance of your deployment. 3.1. General recommendations Take a full backup as soon as deployment is complete, and store the backup in a separate location. Take regular backups thereafter. See Configuring backup and recovery options for details. Avoid running any service that your deployment depends on as a virtual machine in the same RHHI for Virtualization environment. If you must run a required service in the same deployment, carefully plan your deployment to minimize the downtime of the virtual machine running the required service. Ensure that hyperconverged hosts have sufficient entropy. Failures can occur when the value in /proc/sys/kernel/random/entropy_avail is less than 200 . To increase entropy, install the rng-tools package and follow the steps in https://access.redhat.com/solutions/1395493 . Document your environment so that everyone who works with it is aware of its current state and required procedures. 3.2. Security recommendations Do not disable any security features (such as HTTPS, SELinux, and the firewall) on the hosts or virtual machines. Register all hosts and Red Hat Enterprise Linux virtual machines to either the Red Hat Content Delivery Network or Red Hat Satellite in order to receive the latest security updates and errata. Create individual administrator accounts, instead of allowing many people to use the default admin account, for proper activity tracking. Limit access to the hosts and create separate logins. Do not create a single root login for everyone to use. See Managing user accounts in the web console in the Red Hat Enterprise Linux 8 documentation. Do not create untrusted users on hosts. Avoid installing additional packages such as analyzers, compilers, or other components that add unnecessary security risk. 3.3. Host recommendations Standardize the hosts in the same cluster. This includes having consistent hardware models and firmware versions. Mixing different server hardware within the same cluster can result in inconsistent performance from host to host. Configure fencing devices at deployment time. Fencing devices are required for high availability. Use separate hardware switches for fencing traffic. If monitoring and fencing go over the same switch, that switch becomes a single point of failure for high availability. 3.4. Networking recommendations Bond network interfaces, especially on production hosts. Bonding improves the overall availability of service, as well as network bandwidth. See Network Bonding in the Administration Guide. For optimal performance and simplified troubleshooting, use VLANs to separate different traffic types and make the best use of 10 GbE or 40 GbE networks. If the underlying switches support jumbo frames, set the MTU to the maximum size (for example, 9000 ) that the underlying switches support. This setting enables optimal throughput, with higher bandwidth and reduced CPU usage, for most applications. The default MTU is determined by the minimum size supported by the underlying switches. If you have LLDP enabled, you can see the MTU supported by the peer of each host in the NIC's tool tip in the Setup Host Networks window. 1 GbE networks should only be used for management traffic. Use 10 GbE or 40 GbE for virtual machines and Ethernet-based storage. 
If additional physical interfaces are added to a host for storage use, uncheck VM network so that the VLAN is assigned directly to the physical interface. 3.4.1. Recommended practices for configuring host networks If your network environment is complex, you may need to configure a host network manually before adding the host to Red Hat Virtualization Manager. Red Hat recommends the following practices for configuring a host network: Configure the network with the Web Console. Alternatively, you can use nmtui or nmcli. If a network is not required for a self-hosted engine deployment or for adding a host to the Manager, configure the network in the Administration Portal after adding the host to the Manager. See Creating a New Logical Network in a Data Center or Cluster. Use the following naming conventions: VLAN devices: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD VLAN interfaces: physical_device.VLAN_ID (for example, eth0.23 , eth1.128 , enp3s0.50 ) Bond interfaces: bondnumber (for example, bond0 , bond1 ) VLANs on bond interfaces: bondnumber.VLAN_ID (for example, bond0.50 , bond1.128 ) Use network bonding . Networking teaming is not supported. Use recommended bonding modes: For the bridged network used as the virtual machine logical network ( ovirtmgmt ), see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . For any other logical network, any supported bonding mode can be used. Red Hat Virtualization's default bonding mode is (Mode 4) Dynamic Link Aggregation . If your switch does not support Link Aggregation Control Protocol (LACP), use (Mode 1) Active-Backup . See Bonding Modes for details. Configure a VLAN on a physical NIC as in the following example (although nmcli is used, you can use any tool): Configure a VLAN on a bond as in the following example (although nmcli is used, you can use any tool): Do not disable firewalld . Customize the firewall rules in the Administration Portal after adding the host to the Manager. See Configuring Host Firewall Rules . 3.5. Self-hosted engine recommendations Create a separate data center and cluster for the Red Hat Virtualization Manager and other infrastructure-level services, if the environment is large enough to allow it. Although the Manager virtual machine can run on hosts in a regular cluster, separation from production virtual machines helps facilitate backup schedules, performance, availability, and security. A storage domain dedicated to the Manager virtual machine is created during self-hosted engine deployment. Do not use this storage domain for any other virtual machines. All self-hosted engine nodes should have an equal CPU family so that the Manager virtual machine can safely migrate between them. If you intend to have various families, begin the installation with the lowest one. If the Manager virtual machine shuts down or needs to be migrated, there must be enough memory on a self-hosted engine node for the Manager virtual machine to restart on or migrate to it.
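Returning to the jumbo frames recommendation in Section 3.4, the MTU can be raised on an existing NetworkManager connection profile once you have confirmed that the underlying switches support it. The profile name bond0 and the value 9000 in this sketch are placeholders.
# Raise the MTU on the bond0 profile, then re-activate it so the change takes effect
nmcli connection modify bond0 802-3-ethernet.mtu 9000
nmcli connection up bond0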
[ "nmcli connection add type vlan con-name vlan50 ifname eth0.50 dev eth0 id 50 nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123.0.1/24 +ivp4.gateway 123.123.0.254", "nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup,miimon=100\" ipv4.method disabled ipv6.method ignore nmcli connection add type ethernet con-name eth0 ifname eth0 master bond0 slave-type bond nmcli connection add type ethernet con-name eth1 ifname eth1 master bond0 slave-type bond nmcli connection add type vlan con-name vlan50 ifname bond0.50 dev bond0 id 50 nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123.0.1/24 +ivp4.gateway 123.123.0.254" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-recommendations
Chapter 14. Log Record Fields
Chapter 14. Log Record Fields The following fields can be present in log records exported by the logging subsystem. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings. To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch /_search URL , to look for a Kubernetes pod name, use /_search?q=kubernetes.pod_name:name-of-my-pod . The top-level fields may be present in every record.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/cluster-logging-exported-fields
Chapter 4. Accessing the registry
Chapter 4. Accessing the registry Use the following sections for instructions on accessing the registry, including viewing logs and metrics, as well as securing and exposing the registry. You can access the registry directly to invoke podman commands. This allows you to push images to or pull them from the integrated registry directly using operations like podman push or podman pull . To do so, you must be logged in to the registry using the podman login command. The operations you can perform depend on your user permissions, as described in the following sections. 4.1. Prerequisites You must have configured an identity provider (IDP). For pulling images, for example when using the podman pull command, the user must have the registry-viewer role. To add this role, run the following command: USD oc policy add-role-to-user registry-viewer <user_name> For writing or pushing images, for example when using the podman push command: The user must have the registry-editor role. To add this role, run the following command: USD oc policy add-role-to-user registry-editor <user_name> Your cluster must have an existing project where the images can be pushed to. 4.2. Accessing registry directly from the cluster You can access the registry from inside the cluster. Procedure Access the registry from the cluster by using internal routes: Access the node by getting the node's name: USD oc get nodes USD oc debug nodes/<node_name> To enable access to tools such as oc and podman on the node, change your root directory to /host : sh-4.2# chroot /host Log in to the container image registry by using your access token: sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443 sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000 You should see a message confirming login, such as: Login Succeeded! Note You can pass any value for the user name; the token contains all necessary information. Passing a user name that contains colons will result in a login failure. Since the Image Registry Operator creates the route, it will likely be similar to default-route-openshift-image-registry.<cluster_name> . Perform podman pull and podman push operations against your registry: Important You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project. In the following examples, use: Component Value <registry_ip> 172.30.124.220 <port> 5000 <project> openshift <image> image <tag> omitted (defaults to latest ) Pull an arbitrary image: sh-4.2# podman pull <name.io>/<image> Tag the new image with the form <registry_ip>:<port>/<project>/<image> . The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry: sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image> Note You must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the podman push in the step will fail. To test, you can create a new project to push the image. Push the newly tagged image to your registry: sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image> Note When pushing images to the internal registry, the repository name must use the <project>/<name> format. Using multiple project levels in the repository name results in an authentication error. 4.3. 
Checking the status of the registry pods As a cluster administrator, you can list the image registry pods running in the openshift-image-registry project and check their status. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure List the pods in the openshift-image-registry project and view their status: USD oc get pods -n openshift-image-registry Example output NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m 4.4. Viewing registry logs You can view the logs for the registry by using the oc logs command. Procedure Use the oc logs command with deployments to view the logs for the container image registry: USD oc logs deployments/image-registry -n openshift-image-registry Example output 2015-05-01T19:48:36.300593110Z time="2015-05-01T19:48:36Z" level=info msg="version=v2.0.0+unknown" 2015-05-01T19:48:36.303294724Z time="2015-05-01T19:48:36Z" level=info msg="redis not configured" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time="2015-05-01T19:48:36Z" level=info msg="using inmemory layerinfo cache" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time="2015-05-01T19:48:36Z" level=info msg="Using OpenShift Auth handler" 2015-05-01T19:48:36.303439084Z time="2015-05-01T19:48:36Z" level=info msg="listening on :5000" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 4.5. Accessing registry metrics The OpenShift Container Registry provides an endpoint for Prometheus metrics . Prometheus is a stand-alone, open source systems monitoring and alerting toolkit. The metrics are exposed at the /extensions/v2/metrics path of the registry endpoint. Procedure You can access the metrics by running a metrics query using a cluster role. Cluster role Create a cluster role if you do not already have one to access the metrics: USD cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF Add this role to a user, run the following command: USD oc adm policy add-cluster-role-to-user prometheus-scraper <username> Metrics query Get the user token. openshift: USD oc whoami -t Run a metrics query in node or inside a pod, for example: USD curl --insecure -s -u <user>:<secret> \ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20 Example output # HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. # TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit="9f72191",gitVersion="v3.11.0+9f72191-135-dirty",major="3",minor="11+"} 1 # HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. # TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type="Hit"} 5 imageregistry_digest_cache_requests_total{type="Miss"} 24 # HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. 
# TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type="Hit"} 33 imageregistry_digest_cache_scoped_requests_total{type="Miss"} 44 # HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. # TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 # HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. # TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method="get",quantile="0.5"} 0.01296087 imageregistry_http_request_duration_seconds{method="get",quantile="0.9"} 0.014847248 imageregistry_http_request_duration_seconds{method="get",quantile="0.99"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method="get"} 12.260727916000022 1 The <user> object can be arbitrary, but <secret> tag must use the user token. 4.6. Additional resources For more information on allowing pods in a project to reference images in another project, see Allowing pods to reference images across projects . A kubeadmin can access the registry until deleted. See Removing the kubeadmin user for more information. For more information on configuring an identity provider, see Understanding identity provider configuration .
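As a follow-up to the push workflow in Section 4.2, which requires an existing project to push into, the following sketch creates a test project and pushes a locally tagged image to it. The project name my-test and the image name my-image are placeholders, and the sketch assumes you have already logged in to the registry with podman login as shown earlier.
# Create a project to receive the image
oc new-project my-test
# Tag a local image into that project and push it to the internal registry
podman tag <name.io>/my-image image-registry.openshift-image-registry.svc:5000/my-test/my-image
podman push image-registry.openshift-image-registry.svc:5000/my-test/my-image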
[ "oc policy add-role-to-user registry-viewer <user_name>", "oc policy add-role-to-user registry-editor <user_name>", "oc get nodes", "oc debug nodes/<node_name>", "sh-4.2# chroot /host", "sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443", "sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "sh-4.2# podman pull <name.io>/<image>", "sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>", "sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>", "oc get pods -n openshift-image-registry", "NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m", "oc logs deployments/image-registry -n openshift-image-registry", "2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002", "cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF", "oc adm policy add-cluster-role-to-user prometheus-scraper <username>", "openshift: oc whoami -t", "curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20", "HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. 
TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/registry/accessing-the-registry
Getting Started Guide
Getting Started Guide Red Hat build of Keycloak 22.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/getting_started_guide/index
probe::nfs.proc.rename
probe::nfs.proc.rename Name probe::nfs.proc.rename - NFS client renames a file on server Synopsis nfs.proc.rename Values new_fh file handle of new parent dir new_filelen length of new file name old_name old file name version NFS version (the function is used for all NFS versions) old_fh file handle of old parent dir prot transfer protocol new_name new file name old_filelen length of old file name server_ip IP address of server
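A minimal way to watch this probe fire is a SystemTap one-liner that prints the old and new file names each time an NFS client issues a rename. This is a sketch only: it assumes systemtap and the matching kernel debuginfo packages are installed, and that old_name and new_name are string values as the table above suggests.
# Print the old and new names for every NFS client rename
stap -e 'probe nfs.proc.rename { printf("rename: %s -> %s\n", old_name, new_name) }'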
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-proc-rename
18.2. Differences between iptables and ipchains
18.2. Differences between iptables and ipchains At first glance, ipchains and iptables appear to be quite similar. Both methods of packet filtering use chains of rules operating within the Linux kernel to decide what to do with packets that match the specified rule or set of rules. However, iptables offers a more extensible way of filtering packets, giving the administrator a greater amount of control without building a great deal of complexity into the system. Specifically, users comfortable with ipchains should be aware of the following significant differences between ipchains and iptables before attempting to use iptables : Under iptables , each filtered packet is processed using rules from only one chain rather than multiple chains. For instance, a FORWARD packet coming into a system using ipchains would have to go through the INPUT, FORWARD, and OUTPUT chains to move along to its destination. However, iptables only sends packets to the INPUT chain if they are destined for the local system and only sends them to the OUTPUT chain if the local system generated the packets. For this reason, it is important to place the rule designed to catch a particular packet within the chain that actually handles the packet. The DENY target has been changed to DROP. In ipchains , packets that matched a rule in a chain could be directed to the DENY target. This target must be changed to DROP under iptables . Order matters when placing options in a rule. With ipchains , the order of the rule options does not matter. The iptables command uses stricter syntax. In iptables commands, the protocol (ICMP, TCP, or UDP) must be specified before the source or destination ports. When specifying network interfaces to be used with a rule, you must only use incoming interfaces ( -i option) with INPUT or FORWARD chains and outgoing interfaces ( -o option) with FORWARD or OUTPUT chains. This is necessary because OUTPUT chains are no longer used by incoming interfaces, and INPUT chains are not seen by packets moving through outgoing interfaces. This is not a comprehensive list of the changes, given that iptables is a fundamentally rewritten network filter. For more specific information, refer to the Linux Packet Filtering HOWTO referenced in Section 18.7, "Additional Resources" .
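To make the syntax points above concrete, the following sketch shows rules written the iptables way: the protocol ( -p tcp ) appears before the port option, the incoming interface ( -i ) is used only on the INPUT chain, and the target is DROP rather than the old DENY. The interface name eth0 and the port number are placeholders.
# Accept inbound SSH arriving on eth0; the protocol is specified before --dport
iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
# Drop (not DENY) all other traffic arriving on eth0
iptables -A INPUT -i eth0 -j DROP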
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-iptables-differences
Chapter 8. JSR-107 (JCache) API
Chapter 8. JSR-107 (JCache) API Starting with JBoss Data Grid 6.5, an implementation of the JCache 1.0.0 API ( JSR-107 ) is included. JCache specifies a standard Java API for caching temporary Java objects in memory. Caching Java objects can help get around bottlenecks arising from using data that is expensive to retrieve (for example, from a database or web service), or data that is hard to calculate. Caching these types of objects in memory can help speed up application performance by retrieving the data directly from memory instead of doing an expensive roundtrip or recalculation. This document specifies how to use JCache with JBoss Data Grid's implementation of the new specification, and explains key aspects of the API. 8.1. Dependencies The JCache dependencies may either be defined in Maven or added to the classpath; both methods are described below: Option 1: Maven In order to use the JCache implementation, the following dependencies need to be added to the Maven pom.xml depending on how it is used: embedded: remote: Option 2: Adding the necessary files to the classpath When not using Maven, the necessary jar files must be on the classpath at runtime. Having these available at runtime may either be accomplished by embedding the jar files directly, by specifying them at runtime, or by adding them into the container used to deploy the application. Procedure 8.1. Embedded Mode Download the Red Hat JBoss Data Grid 6.6.1 Library from the Red Hat Customer Portal. Extract the downloaded archive to a local directory. Locate the following files: jboss-datagrid-6.6.1-library/infinispan-embedded-6.4.1.Final-redhat-1.jar jboss-datagrid-6.6.1-library/lib/cache-api-1.0.0.redhat-1.jar Ensure both of the above jar files are on the classpath at runtime. Procedure 8.2. Remote Mode Download the Red Hat JBoss Data Grid 6.6.1 Hot Rod Java Client from the Red Hat Customer Portal. Extract the downloaded archive to a local directory. Locate the following files: jboss-datagrid-6.6.1-remote-java-client/infinispan-remote-6.4.1.Final-redhat-1.jar jboss-datagrid-6.6.1-remote-java-client/cache-api-1.0.0.redhat-1.jar Ensure both of the above jar files are on the classpath at runtime.
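For Option 2 above, the classpath can be supplied directly on the java command line at runtime. Only the two jar paths come from the embedded-mode procedure; the main class com.example.MyCacheApp is purely a placeholder, and the paths assume the archive was extracted into the current directory.
# Embedded mode: put both jars on the runtime classpath (placeholder main class)
java -cp .:jboss-datagrid-6.6.1-library/infinispan-embedded-6.4.1.Final-redhat-1.jar:jboss-datagrid-6.6.1-library/lib/cache-api-1.0.0.redhat-1.jar com.example.MyCacheApp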
[ "<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-embedded</artifactId> <version>USD{infinispan.version}</version> </dependency> <dependency> <groupId>javax.cache</groupId> <artifactId>cache-api</artifactId> <version>1.0.0.redhat-1</version> </dependency>", "<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-remote</artifactId> <version>USD{infinispan.version}</version> </dependency> <dependency> <groupId>javax.cache</groupId> <artifactId>cache-api</artifactId> <version>1.0.0.redhat-1</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/chap-jsr-107_jcache_api
Chapter 6. Handling high volumes of messages
Chapter 6. Handling high volumes of messages If your Streams for Apache Kafka deployment needs to handle a high volume of messages, you can use configuration options to optimize for throughput and latency. Producer and consumer configuration can help control the size and frequency of requests to Kafka brokers. For more information on the configuration options, see the following: Apache Kafka configuration documentation for producers Apache Kafka configuration documentation for consumers You can also use the same configuration options with the producers and consumers used by the Kafka Connect runtime source connectors (including MirrorMaker 2) and sink connectors. Source connectors Producers from the Kafka Connect runtime send messages to the Kafka cluster. For MirrorMaker 2, since the source system is Kafka, consumers retrieve messages from a source Kafka cluster. Sink connectors Consumers from the Kafka Connect runtime retrieve messages from the Kafka cluster. For consumers, you might increase the amount of data fetched in a single fetch request to reduce latency. You increase the fetch request size using the fetch.max.bytes and max.partition.fetch.bytes properties. You can also set a maximum limit on the number of messages returned from the consumer buffer using the max.poll.records property. For MirrorMaker 2, configure the fetch.max.bytes , max.partition.fetch.bytes , and max.poll.records values at the source connector level ( consumer.* ), as they relate to the specific consumer that fetches messages from the source. For producers, you might increase the size of the message batches sent in a single produce request. You increase the batch size using the batch.size property. A larger batch size reduces the number of outstanding messages ready to be sent and the size of the backlog in the message queue. Messages being sent to the same partition are batched together. A produce request is sent to the target cluster when the batch size is reached. By increasing the batch size, produce requests are delayed and more messages are added to the batch and sent to brokers at the same time. This can improve throughput when you have just a few topic partitions that handle large numbers of messages. Consider the number and size of the records that the producer handles for a suitable producer batch size. Use linger.ms to add a wait time in milliseconds to delay produce requests when producer load decreases. The delay means that more records can be added to batches if they are under the maximum batch size. Configure the batch.size and linger.ms values at the source connector level ( producer.override.* ), as they relate to the specific producer that sends messages to the target Kafka cluster. 
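One way to gauge the effect of batch.size and linger.ms changes before rolling them out is the producer performance tool that ships with Apache Kafka. This is a sketch only: the topic name, record counts, and bootstrap address are placeholders, the tuning values simply mirror the examples later in this chapter, and the script path depends on where your Kafka distribution is installed or which broker pod you run it from.
# Measure producer throughput with the tuned batching settings
bin/kafka-producer-perf-test.sh --topic my-topic --num-records 1000000 --record-size 1024 --throughput -1 --producer-props bootstrap.servers=my-cluster-kafka-bootstrap:9092 batch.size=327680 linger.ms=100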
For Kafka Connect source connectors, the data streaming pipeline to the target Kafka cluster is as follows: Data streaming pipeline for Kafka Connect source connector external data source (Kafka Connect tasks) source message queue producer buffer target Kafka topic For Kafka Connect sink connectors, the data streaming pipeline to the target external data source is as follows: Data streaming pipeline for Kafka Connect sink connector source Kafka topic (Kafka Connect tasks) sink message queue consumer buffer external data source For MirrorMaker 2, the data mirroring pipeline to the target Kafka cluster is as follows: Data mirroring pipeline for MirrorMaker 2 source Kafka topic (Kafka Connect tasks) source message queue producer buffer target Kafka topic The producer sends messages in its buffer to topics in the target Kafka cluster. While this is happening, Kafka Connect tasks continue to poll the data source to add messages to the source message queue. The size of the producer buffer for the source connector is set using the producer.override.buffer.memory property. Tasks wait for a specified timeout period ( offset.flush.timeout.ms ) before the buffer is flushed. This should be enough time for the sent messages to be acknowledged by the brokers and offset data committed. The source task does not wait for the producer to empty the message queue before committing offsets, except during shutdown. If the producer is unable to keep up with the throughput of messages in the source message queue, buffering is blocked until there is space available in the buffer within a time period bounded by max.block.ms . Any unacknowledged messages still in the buffer are sent during this period. New messages are not added to the buffer until these messages are acknowledged and flushed. You can try the following configuration changes to keep the underlying source message queue of outstanding messages at a manageable size: Increasing the default value in milliseconds of the offset.flush.timeout.ms Ensuring that there are enough CPU and memory resources Increasing the number of tasks that run in parallel by doing the following: Increasing the number of tasks that run in parallel using the tasksMax property Increasing the number of worker nodes that run tasks using the replicas property Consider the number of tasks that can run in parallel according to the available CPU and memory resources and number of worker nodes. You might need to keep adjusting the configuration values until they have the desired effect. 6.1. Configuring Kafka Connect for high-volume messages Kafka Connect fetches data from the source external data system and hands it to the Kafka Connect runtime producers so that it's replicated to the target cluster. The following example shows configuration for Kafka Connect using the KafkaConnect custom resource. Example Kafka Connect configuration for handling high volumes of messages apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" spec: replicas: 3 config: offset.flush.timeout.ms: 10000 # ... resources: requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi # ... Producer configuration is added for the source connector, which is managed using the KafkaConnector custom resource. 
Example source connector configuration for handling high volumes of messages apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector tasksMax: 2 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 # ... Note FileStreamSourceConnector and FileStreamSinkConnector are provided as example connectors. For information on deploying them as KafkaConnector resources, see Deploying KafkaConnector resources . Consumer configuration is added for the sink connector. Example sink connector configuration for handling high volumes of messages apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector tasksMax: 2 config: consumer.fetch.max.bytes: 52428800 consumer.max.partition.fetch.bytes: 1048576 consumer.max.poll.records: 500 # ... If you are using the Kafka Connect API instead of the KafkaConnector custom resource to manage your connectors, you can add the connector configuration as a JSON object. Example curl request to add source connector configuration for handling high volumes of messages curl -X POST \ http://my-connect-cluster-connect-api:8083/connectors \ -H 'Content-Type: application/json' \ -d '{ "name": "my-source-connector", "config": { "connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector", "file": "/opt/kafka/LICENSE", "topic":"my-topic", "tasksMax": "4", "type": "source" "producer.override.batch.size": 327680 "producer.override.linger.ms": 100 } }' 6.2. Configuring MirrorMaker 2 for high-volume messages MirrorMaker 2 fetches data from the source cluster and hands it to the Kafka Connect runtime producers so that it's replicated to the target cluster. The following example shows the configuration for MirrorMaker 2 using the KafkaMirrorMaker2 custom resource. Example MirrorMaker 2 configuration for handling high volumes of messages apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.9.0 replicas: 1 connectCluster: "my-cluster-target" clusters: - alias: "my-cluster-source" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: "my-cluster-target" config: offset.flush.timeout.ms: 10000 bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: tasksMax: 2 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 consumer.fetch.max.bytes: 52428800 consumer.max.partition.fetch.bytes: 1048576 consumer.max.poll.records: 500 # ... resources: requests: cpu: "1" memory: Gi limits: cpu: "2" memory: 4Gi 6.3. Checking the MirrorMaker 2 message flow If you are using Prometheus and Grafana to monitor your deployment, you can check the MirrorMaker 2 message flow. The example MirrorMaker 2 Grafana dashboards provided with Streams for Apache Kafka show the following metrics related to the flush pipeline. The number of messages in Kafka Connect's outstanding messages queue The available bytes of the producer buffer The offset commit timeout in milliseconds You can use these metrics to gauge whether or not you need to tune your configuration based on the volume of messages. Additional resources Introducing metrics Adding Kafka Connect connectors
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: replicas: 3 config: offset.flush.timeout.ms: 10000 # resources: requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector tasksMax: 2 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector tasksMax: 2 config: consumer.fetch.max.bytes: 52428800 consumer.max.partition.fetch.bytes: 1048576 consumer.max.poll.records: 500 #", "curl -X POST http://my-connect-cluster-connect-api:8083/connectors -H 'Content-Type: application/json' -d '{ \"name\": \"my-source-connector\", \"config\": { \"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\", \"file\": \"/opt/kafka/LICENSE\", \"topic\":\"my-topic\", \"tasksMax\": \"4\", \"type\": \"source\" \"producer.override.batch.size\": 327680 \"producer.override.linger.ms\": 100 } }'", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.9.0 replicas: 1 connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-source\" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: \"my-cluster-target\" config: offset.flush.timeout.ms: 10000 bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 2 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 consumer.fetch.max.bytes: 52428800 consumer.max.partition.fetch.bytes: 1048576 consumer.max.poll.records: 500 # resources: requests: cpu: \"1\" memory: Gi limits: cpu: \"2\" memory: 4Gi" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_tuning/con-high-volume-config-properties-str
Chapter 39. InternalServiceTemplate schema reference
Chapter 39. InternalServiceTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate ipFamilyPolicy Specifies the IP Family Policy used by the service. Available options are SingleStack , PreferDualStack and RequireDualStack . SingleStack is for a single IP family. PreferDualStack is for two IP families on dual-stack configured clusters or a single IP family on single-stack clusters. RequireDualStack fails unless there are two IP families on dual-stack configured clusters. If unspecified, OpenShift will choose the default value based on the service type. string (one of [RequireDualStack, SingleStack, PreferDualStack]) ipFamilies Specifies the IP Families used by the service. Available options are IPv4 and IPv6 . If unspecified, OpenShift will choose the default value based on the ipFamilyPolicy setting. string (one or more of [IPv6, IPv4]) array
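As a hedged illustration of where this template type is used in practice, the patch below sets the IP family options on the Kafka bootstrap service, assuming that bootstrapService in KafkaClusterTemplate is one of the usage points listed above and that your Kafka resource is named my-cluster; check the KafkaClusterTemplate reference for the exact field names in your version.
# Request dual-stack behavior on the bootstrap service of the my-cluster Kafka resource
oc patch kafka my-cluster --type merge -p '{"spec":{"kafka":{"template":{"bootstrapService":{"ipFamilyPolicy":"PreferDualStack","ipFamilies":["IPv4","IPv6"]}}}}}'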
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-InternalServiceTemplate-reference
2.7. Using NetworkManager with sysconfig files
2.7. Using NetworkManager with sysconfig files The /etc/sysconfig/ directory is a location for configuration files and scripts. Most network configuration information is stored there, with the exception of VPN, mobile broadband and PPPoE configuration, which are stored in the /etc/NetworkManager/ subdirectories. For example, interface-specific information is stored in the ifcfg files in the /etc/sysconfig/network-scripts/ directory. For global settings, use the /etc/sysconfig/network file. Information for VPNs, mobile broadband and PPPoE connections is stored in /etc/NetworkManager/system-connections/ . In Red Hat Enterprise Linux 7 if you edit an ifcfg file, NetworkManager is not automatically aware of the change and has to be prompted to notice the change. If you use one of the tools to update NetworkManager profile settings, NetworkManager does not implement those changes until you reconnect using that profile. For example, if configuration files have been changed using an editor, NetworkManager must read the configuration files again. To ensure this, enter as root to reload all connection profiles: Alternatively, to reload only one changed file, ifcfg- ifname : Note that you can specify multiple file names using the above command. Changes made using tools such as nmcli do not require a reload but do require the associated interface to be put down and then up again: ~]# nmcli dev disconnect interface-name ~]# nmcli con up interface-name For more details about nmcli , see Section 3.3, "Configuring IP Networking with nmcli" . NetworkManager does not trigger any of the network scripts, though the network scripts attempt to trigger NetworkManager if it is running when the ifup commands are used. See Section 2.6, "Using NetworkManager with Network Scripts" for the explanation of the network scripts. The ifup script is a generic script which does a few things and then calls interface-specific scripts such as ifup- device_name , ifup-wireless , ifup-ppp , and so on. When a user runs ifup enp1s0 manually: ifup looks for a file called /etc/sysconfig/network-scripts/ifcfg-enp1s0 ; if the ifcfg file exists, ifup looks for the TYPE key in that file to determine which type-specific script to call; ifup calls ifup-wireless or ifup- device_name based on TYPE ; the type-specific scripts do type-specific setup; the type-specific scripts let common functions perform IP -related tasks like DHCP or static setup. On bootup, /etc/init.d/network reads through all the ifcfg files and for each one that has ONBOOT=yes , it checks whether NetworkManager is already starting the DEVICE from that ifcfg file. If NetworkManager is starting that device or has already started it, nothing more is done for that file, and the ONBOOT=yes file is checked. If NetworkManager is not yet starting that device, the initscripts continue with their traditional behavior and call ifup for that ifcfg file. The result is that any ifcfg file that has ONBOOT=yes is expected to be started on system bootup, either by NetworkManager or by the initscripts. This ensures that some legacy network types which NetworkManager does not handle (such as ISDN or analog dial-up modems) as well as any new application not yet supported by NetworkManager are still correctly started by the initscripts even though NetworkManager is unable to handle them. Important It is recommended to not store the backup files anywhere within the /etc directory, or in the same location as the live files, because the script literally does ifcfg-* . 
Only these extensions are excluded: .old , .orig , .rpmnew , .rpmorig , and .rpmsave . For more information on using sysconfig files, see Section 3.5, "Configuring IP Networking with ifcfg Files" and the ifcfg (8) man page.
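As a quick worked example, the edit-and-reload cycle described above might look like the following for an Ethernet profile on the interface enp1s0; the interface name and the edited setting are placeholders, so adjust them to match your system.
~]# vi /etc/sysconfig/network-scripts/ifcfg-enp1s0               # edit the profile, for example change BOOTPROTO or IPADDR
~]# nmcli connection reload                                      # make NetworkManager re-read all connection profiles
~]# nmcli con load /etc/sysconfig/network-scripts/ifcfg-enp1s0   # or reload only the edited file
~]# nmcli dev disconnect enp1s0                                  # the new settings take effect only after the connection
~]# nmcli con up enp1s0                                          # is brought down and up again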
[ "~]# nmcli connection reload", "~]# nmcli con load /etc/sysconfig/network-scripts/ifcfg- ifname" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-Using_NetworkManager_with_sysconfig_Files
Chapter 4. Container Analysis Tools
Chapter 4. Container Analysis Tools This section describes tools for the analysis of containers. 4.1. Atomic Command With the atomic command, you can see which layers went into your images, and if any of those layers have been updated, you know that you need to rebuild your image.
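As a rough illustration of that inspection, the commands below show what such a check might look like; the image name is a placeholder and the exact subcommand names differ between atomic versions, so treat this as a sketch and confirm with atomic --help on your host.
~]# atomic images list            # list local container images (older atomic versions use 'atomic images')
~]# atomic info <image_name>      # show the labels and metadata recorded in an image
~]# atomic scan <image_name>      # scan the image for known vulnerabilities, if a scanner is configured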
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/container_security_guide/container_analysis_tools
Chapter 2. Installation
Chapter 2. Installation This chapter describes in detail how to get access to the content set, install Red Hat Software Collections 3.3 on the system, and rebuild Red Hat Software Collections. 2.1. Getting Access to Red Hat Software Collections The Red Hat Software Collections content set is available to customers with Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 subscriptions listed at https://access.redhat.com/solutions/472793 . For information on how to register your system with Red Hat Subscription Management (RHSM), see Using and Configuring Red Hat Subscription Manager . For detailed instructions on how to enable Red Hat Software Collections using RHSM, see Section 2.1.1, "Using Red Hat Subscription Management" . Since Red Hat Software Collections 2.2, the Red Hat Software Collections and Red Hat Developer Toolset content is available also in the ISO format at https://access.redhat.com/downloads , specifically for Server and Workstation . Note that packages that require the Optional channel, which are listed in Section 2.1.2, "Packages from the Optional Channel" , cannot be installed from the ISO image. Note Packages that require the Optional channel cannot be installed from the ISO image. A list of packages that require enabling of the Optional channel is provided in Section 2.1.2, "Packages from the Optional Channel" . Beta content is unavailable in the ISO format. 2.1.1. Using Red Hat Subscription Management If your system is registered with Red Hat Subscription Management, complete the following steps to attach the subscription that provides access to the repository for Red Hat Software Collections and enable the repository: Display a list of all subscriptions that are available for your system and determine the pool ID of a subscription that provides Red Hat Software Collections. To do so, type the following at a shell prompt as root : subscription-manager list --available For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to it. The pool ID is listed on a line beginning with Pool Id . Attach the appropriate subscription to your system by running the following command as root : subscription-manager attach --pool= pool_id Replace pool_id with the pool ID you determined in the step. To verify the list of subscriptions your system has currently attached, type as root : subscription-manager list --consumed Display the list of available Yum list repositories to retrieve repository metadata and determine the exact name of the Red Hat Software Collections repositories. As root , type: subscription-manager repos --list Or alternatively, run yum repolist all for a brief list. The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Replace variant with the Red Hat Enterprise Linux system variant, that is, server or workstation . Note that Red Hat Software Collections is supported neither on the Client nor on the ComputeNode variant. Enable the appropriate repository by running the following command as root : subscription-manager repos --enable repository Once the subscription is attached to the system, you can install Red Hat Software Collections as described in Section 2.2, "Installing Red Hat Software Collections" . For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see Using and Configuring Red Hat Subscription Manager . 
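Put together, the subscription steps above amount to a short session such as the following; the pool ID is a placeholder taken from the output of the first command, and a Red Hat Enterprise Linux 7 Server system is assumed for the repository name.
~]# subscription-manager list --available              # find the pool that provides Red Hat Software Collections
~]# subscription-manager attach --pool=<pool_id>       # replace <pool_id> with the value from the previous command
~]# subscription-manager list --consumed               # confirm that the subscription is attached
~]# subscription-manager repos --list | grep rhscl     # determine the exact repository name
~]# subscription-manager repos --enable rhel-server-rhscl-7-rpms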
Note Subscription through RHN is no longer available. 2.1.2. Packages from the Optional Channel Some of the Red Hat Software Collections packages require the Optional channel to be enabled in order to complete the full installation of these packages. For detailed instructions on how to subscribe your system to this channel, see the relevant Knowledgebase article at https://access.redhat.com/solutions/392003 . Packages from Software Collections for Red Hat Enterprise Linux that require the Optional channel to be enabled are listed in the tables below. Note that packages from the Optional channel are unsupported. For details, see the Knowledgebase article at https://access.redhat.com/articles/1150793 . Table 2.1. Packages That Require Enabling of the Optional Channel in Red Hat Enterprise Linux 7 Package from a Software Collection Required Package from the Optional Channel devtoolset-7-dyninst-testsuite glibc-static devtoolset-7-gcc-plugin-devel libmpc-devel devtoolset-8-dyninst-testsuite glibc-static devtoolset-8-gcc-plugin-devel libmpc-devel httpd24-mod_ldap apr-util-ldap httpd24-mod_session apr-util-openssl python27-python-debug tix python27-python-tools tix python27-tkinter tix rh-git218-git-all cvsps, subversion-perl rh-git218-git-cvs cvsps rh-git218-git-svn subversion-perl rh-git218-perl-Git-SVN subversion-perl rh-jmc hyphen, hyphen-en rh-jmc-jmc hyphen, hyphen-en rh-maven35-xpp3-javadoc java-11-openjdk-javadoc Table 2.2. Packages That Require Enabling of the Optional Channel in Red Hat Enterprise Linux 6 Package from a Software Collection Required Package from the Optional Channel devtoolset-7-dyninst-testsuite glibc-static devtoolset-8-dyninst-testsuite glibc-static devtoolset-8-elfutils-devel xz-devel devtoolset-8-elfutils-devel xz-devel devtoolset-8-gcc-plugin-devel gmp-devel, mpfr-devel devtoolset-8-libgccjit mpfr libyaml-devel libyaml-devel libyaml-devel libyaml-devel rh-mariadb101-boost-devel libicu-devel rh-mariadb101-boost-examples libicu-devel rh-mariadb101-boost-static libicu-devel rh-mariadb101-mariadb-devel libcom_err-devel rh-mariadb102-mariadb-devel libcom_err-devel rh-mongodb32-boost-devel libicu-devel rh-mongodb32-boost-examples libicu-devel rh-mongodb32-boost-static libicu-devel rh-mongodb32-golang-github-10gen-openssl-devel libcom_err-devel rh-mongodb32-golang-github-10gen-openssl-unit-test libcom_err-devel rh-mongodb32-mongo-tools-devel libcom_err-devel rh-mongodb32-mongo-tools-unit-test libcom_err-devel rh-mongodb32-yaml-cpp-devel libicu-devel rh-mongodb34-boost-devel libicu-devel rh-mongodb34-boost-examples libicu-devel rh-mongodb34-boost-static libicu-devel rh-mongodb34-yaml-cpp-devel libicu-devel rh-mysql57-mysql-devel libcom_err-devel rh-mysql57-mysql-test perl-JSON rh-nodejs6 libcom_err-devel rh-nodejs6-node-gyp libcom_err-devel rh-nodejs6-nodejs-devel libcom_err-devel rh-nodejs6-npm libcom_err-devel rh-perl524-mod_perl systemtap-sdt-devel rh-perl524-mod_perl-devel systemtap-sdt-devel rh-perl524-perl-App-cpanminus systemtap-sdt-devel rh-perl524-perl-core systemtap-sdt-devel rh-perl524-perl-CPAN systemtap-sdt-devel rh-perl524-perl-devel systemtap-sdt-devel rh-perl524-perl-Encode-devel systemtap-sdt-devel rh-perl524-perl-ExtUtils-CBuilder systemtap-sdt-devel rh-perl524-perl-ExtUtils-Embed systemtap-sdt-devel rh-perl524-perl-ExtUtils-Install systemtap-sdt-devel rh-perl524-perl-ExtUtils-MakeMaker systemtap-sdt-devel rh-perl524-perl-ExtUtils-MakeMaker-CPANfile systemtap-sdt-devel rh-perl524-perl-ExtUtils-Miniperl systemtap-sdt-devel rh-perl524-perl-inc-latest 
systemtap-sdt-devel rh-perl524-perl-libnetcfg systemtap-sdt-devel rh-perl524-perl-Module-Build systemtap-sdt-devel rh-perl524-perl-tests systemtap-sdt-devel rh-php70-php-imap libc-client rh-php70-php-recode recode rh-php70-php-tidy libtidy 2.2. Installing Red Hat Software Collections Red Hat Software Collections is distributed as a collection of RPM packages that can be installed, updated, and uninstalled by using the standard package management tools included in Red Hat Enterprise Linux. Note that a valid subscription is required to install Red Hat Software Collections on your system. For detailed instructions on how to associate your system with an appropriate subscription and get access to Red Hat Software Collections, see Section 2.1, "Getting Access to Red Hat Software Collections" . Use of Red Hat Software Collections 3.3 requires the removal of any earlier pre-release versions, including Beta releases. If you have installed any version of Red Hat Software Collections 3.3, uninstall it from your system and install the new version as described in the Section 2.3, "Uninstalling Red Hat Software Collections" and Section 2.2.1, "Installing Individual Software Collections" sections. The in-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 is not supported by Red Hat Software Collections. As a consequence, the installed Software Collections might not work correctly after the upgrade. If you want to upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7, it is strongly recommended to remove all Red Hat Software Collections packages, perform the in-place upgrade, update the Red Hat Software Collections repository, and install the Software Collections packages again. It is advisable to back up all data before upgrading. 2.2.1. Installing Individual Software Collections To install any of the Software Collections that are listed in Table 1.1, "Red Hat Software Collections 3.3 Components" , install the corresponding meta package by typing the following at a shell prompt as root : yum install software_collection ... Replace software_collection with a space-separated list of Software Collections you want to install. For example, to install php54 and rh-mariadb100 , type as root : This installs the main meta package for the selected Software Collection and a set of required packages as its dependencies. For information on how to install additional packages such as additional modules, see Section 2.2.2, "Installing Optional Packages" . 2.2.2. Installing Optional Packages Each component of Red Hat Software Collections is distributed with a number of optional packages that are not installed by default. To list all packages that are part of a certain Software Collection but are not installed on your system, type the following at a shell prompt: yum list available software_collection -\* To install any of these optional packages, type as root : yum install package_name ... Replace package_name with a space-separated list of packages that you want to install. For example, to install the rh-perl526-perl-CPAN and rh-perl526-perl-Archive-Tar , type: 2.2.3. Installing Debugging Information To install debugging information for any of the Red Hat Software Collections packages, make sure that the yum-utils package is installed and type the following command as root : debuginfo-install package_name For example, to install debugging information for the rh-ruby25-ruby package, type: Note that you need to have access to the repository with these packages. 
If your system is registered with Red Hat Subscription Management, enable the rhel- variant -rhscl-6-debug-rpms or rhel- variant -rhscl-7-debug-rpms repository as described in Section 2.1.1, "Using Red Hat Subscription Management" . For more information on how to get access to debuginfo packages, see https://access.redhat.com/solutions/9907 . 2.3. Uninstalling Red Hat Software Collections To uninstall any of the Software Collections components, type the following at a shell prompt as root : yum remove software_collection \* Replace software_collection with the Software Collection component you want to uninstall. Note that uninstallation of the packages provided by Red Hat Software Collections does not affect the Red Hat Enterprise Linux system versions of these tools. 2.4. Rebuilding Red Hat Software Collections <collection>-build packages are not provided by default. If you wish to rebuild a collection and do not want or cannot use the rpmbuild --define 'scl foo' command, you first need to rebuild the metapackage, which provides the <collection>-build package. Note that existing collections should not be rebuilt with different content. To add new packages into an existing collection, you need to create a new collection containing the new packages and make it dependent on packages from the original collection. The original collection has to be used without changes. For detailed information on building Software Collections, refer to the Red Hat Software Collections Packaging Guide .
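As a condensed sketch of the installation, verification, and removal workflow from Sections 2.2 and 2.3, the session below uses the rh-php72 and rh-mariadb102 collections as examples; the optional package name and the scl invocation (provided by the scl-utils package) are illustrative assumptions rather than required steps.
~]# yum install rh-php72 rh-mariadb102           # install the meta packages for two example collections
~]# yum list available rh-php72-\*               # list optional packages shipped with a collection
~]# yum install rh-php72-php-devel               # example optional package; install only what you need
~]# scl enable rh-php72 'php --version'          # run a command in the collection's environment
~]# yum remove rh-php72\*                        # uninstall the collection; system versions are unaffected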
[ "rhel- variant -rhscl-6-rpms rhel- variant -rhscl-6-debug-rpms rhel- variant -rhscl-6-source-rpms rhel-server-rhscl-6-eus-rpms rhel-server-rhscl-6-eus-source-rpms rhel-server-rhscl-6-eus-debug-rpms rhel- variant -rhscl-7-rpms rhel- variant -rhscl-7-debug-rpms rhel- variant -rhscl-7-source-rpms rhel-server-rhscl-7-eus-rpms rhel-server-rhscl-7-eus-source-rpms rhel-server-rhscl-7-eus-debug-rpms>", "~]# yum install rh-php72 rh-mariadb102", "~]# yum install rh-perl526-perl-CPAN rh-perl526-perl-Archive-Tar", "~]# debuginfo-install rh-ruby25-ruby" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.3_release_notes/chap-Installation
13.2. Setting up the Job Scheduler
13.2. Setting up the Job Scheduler The Certificate Manager can execute a job only if the Job Scheduler is enabled. Job settings, such as enabling the job schedule, setting the frequency, and enabling the job modules, can be configured through the Certificate System CA Console or by editing the CS.cfg file. To turn the Job Scheduler on: Open the Certificate Manager Console. In the Configuration tab navigation tree, click Job Scheduler . This opens the General Settings tab, which shows whether the Job Scheduler is currently enabled. Click the Enable Jobs Schedule checkbox to enable or disable the Job Scheduler. Disabling the Job Scheduler turns off all the jobs. Set the frequency at which the scheduler checks for jobs in the Check Frequency field. The frequency is how often the Job Scheduler daemon thread wakes up and calls the configured jobs that meet the cron specification. By default, it is set to one minute. Note The window for entering this information may be too small to see the input. Drag the corners of the Certificate Manager Console to enlarge the entire window. Click Save . Note pkiconsole is being deprecated.
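Because pkiconsole is being deprecated, the same change can also be made by editing CS.cfg directly. The sketch below assumes a default instance named pki-tomcat and the commonly documented jobsScheduler.* parameter names; verify both the file path and the parameter names against your own deployment before editing.
~]# grep '^jobsScheduler' /var/lib/pki/pki-tomcat/ca/conf/CS.cfg                                              # inspect the current scheduler settings
~]# sed -i 's/^jobsScheduler.enabled=.*/jobsScheduler.enabled=true/' /var/lib/pki/pki-tomcat/ca/conf/CS.cfg   # enable the scheduler
~]# systemctl restart pki-tomcatd@pki-tomcat.service                                                          # restart the CA subsystem so the change takes effect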
[ "pkiconsole https://server.example.com:8443/ca" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Setting_up_the_Job_Scheduler
14.5.9. Setting Network Interface Bandwidth Parameters
14.5.9. Setting Network Interface Bandwidth Parameters domiftune sets the guest virtual machine's network interface bandwidth parameters. The following format should be used: The only required parameters are the domain name and interface device of the guest virtual machine; the --config , --live , and --current options function the same as in Section 14.19, "Setting Schedule Parameters" . If no limit is specified, the command queries the current network interface settings. Otherwise, alter the limits with the following options: <interface-device> This is mandatory and it specifies the network interface whose bandwidth parameters are set or queried. interface-device can be the interface's target name (<target dev='name'/>), or the MAC address. If no --inbound or --outbound is specified, this command queries and shows the bandwidth settings. Otherwise, it sets the inbound or outbound bandwidth. average,peak,burst have the same meaning as in the attach-interface command. Refer to Section 14.3, "Attaching Interface Devices" .
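For example, querying and then limiting the bandwidth of a guest's interface might look as follows; guest1 and vnet0 are placeholder names, and the average, peak, and burst values use the same units as the attach-interface command (typically kilobytes per second for average and peak, kilobytes for burst).
~]# virsh domiftune guest1 vnet0                    # query the current bandwidth settings
~]# virsh domiftune guest1 vnet0 --inbound 1000,2000,2048 --outbound 1000 --live --config   # throttle inbound and outbound traffic for the running guest and its persistent configuration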
[ "virsh domiftune domain interface-device [[--config] [--live] | [--current]] [--inbound average,peak,burst] [--outbound average,peak,burst]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-Domain_Commands-Setting_network_interface_bandwidth_parameters
4.7. Kernel
4.7. Kernel Kernel Media support The following features are presented as Technology Previews: The latest upstream video4linux Digital video broadcasting Primarily infrared remote control device support Various webcam support fixes and improvements Package: kernel-2.6.32-431 Linux (NameSpace) Container [LXC] Linux containers provide a flexible approach to application runtime containment on bare-metal systems without the need to fully virtualize the workload. Red Hat Enterprise Linux 6 provides application level containers to separate and control the application resource usage policies via cgroups and namespaces. This release includes basic management of container life-cycle by allowing creation, editing and deletion of containers via the libvirt API and the virt-manager GUI. Linux Containers are a Technology Preview. Packages: libvirt-0.9.10-21 , virt-manager-0.9.0-14 Diagnostic pulse for the fence_ipmilan agent, BZ# 655764 A diagnostic pulse can now be issued on the IPMI interface using the fence_ipmilan agent. This new Technology Preview is used to force a kernel dump of a host if the host is configured to do so. Note that this feature is not a substitute for the off operation in a production cluster. Package: fence-agents-3.1.5-35
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/kernel_tp
Chapter 8. Deleting a ROSA with HCP cluster
Chapter 8. Deleting a ROSA with HCP cluster If you want to delete a Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster, you can use either the Red Hat OpenShift Cluster Manager or the ROSA command line interface (CLI) ( rosa ). After deleting your cluster, you can also delete the AWS Identity and Access Management (IAM) resources that are used by the cluster. 8.1. Deleting a ROSA with HCP cluster and the cluster-specific IAM resources You can delete a ROSA with HCP cluster by using the ROSA command line interface (CLI) ( rosa ) or Red Hat OpenShift Cluster Manager. After deleting the cluster, you can clean up the cluster-specific Identity and Access Management (IAM) resources in your AWS account by using the ROSA CLI. The cluster-specific resources include the Operator roles and the OpenID Connect (OIDC) provider. Note The cluster deletion must complete before you remove the IAM resources, because the resources are used in the cluster deletion and clean up processes. If add-ons are installed, the cluster deletion takes longer because add-ons are uninstalled before the cluster is deleted. The amount of time depends on the number and size of the add-ons. Prerequisites You have installed a ROSA with HCP cluster. You have installed and configured the latest ROSA CLI ( rosa ) on your installation host. Procedure Get the cluster ID, the Amazon Resource Names (ARNs) for the cluster-specific Operator roles, and the endpoint URL for the OIDC provider by running the following command: USD rosa describe cluster --cluster=<cluster_name> Example output Name: test_cluster Domain Prefix: test_cluster Display Name: test_cluster ID: <cluster_id> 1 External ID: <external_id> Control Plane: ROSA Service Hosted OpenShift Version: 4.18.0 Channel Group: stable DNS: test_cluster.l3cn.p3.openshiftapps.com AWS Account: <AWS_id> AWS Billing Account: <AWS_id> API URL: https://api.test_cluster.l3cn.p3.openshiftapps.com:443 Console URL: Region: us-east-1 Availability: - Control Plane: MultiAZ - Data Plane: SingleAZ Nodes: - Compute (desired): 2 - Compute (current): 0 Network: - Type: OVNKubernetes - Service CIDR: 172.30.0.0/16 - Machine CIDR: 10.0.0.0/16 - Pod CIDR: 10.128.0.0/14 - Host Prefix: /23 - Subnets: <subnet_ids> EC2 Metadata Http Tokens: optional Role (STS) ARN: arn:aws:iam::<AWS_id>:role/test_cluster-HCP-ROSA-Installer-Role Support Role ARN: arn:aws:iam::<AWS_id>:role/test_cluster-HCP-ROSA-Support-Role Instance IAM Roles: - Worker: arn:aws:iam::<AWS_id>:role/test_cluster-HCP-ROSA-Worker-Role Operator IAM Roles: 2 - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-cloud-network-config-controller-cloud-crede - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-image-registry-installer-cloud-credentials - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-ingress-operator-cloud-credentials - arn:aws:iam::<AWS_id>:role/test_cluster-kube-system-kube-controller-manager - arn:aws:iam::<AWS_id>:role/test_cluster-kube-system-capa-controller-manager - arn:aws:iam::<AWS_id>:role/test_cluster-kube-system-control-plane-operator - arn:aws:iam::<AWS_id>:role/hcpcluster-kube-system-kms-provider - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-cluster-csi-drivers-ebs-cloud-credentials Managed Policies: Yes State: ready Private: No Created: Apr 16 2024 20:32:06 UTC User Workload Monitoring: Enabled Details Page: https://console.redhat.com/openshift/details/s/<cluster_id> OIDC Endpoint URL: https://oidc.op1.openshiftapps.com/<cluster_id> (Managed) 3 Audit Log Forwarding: Disabled External 
Authentication: Disabled 1 Lists the cluster ID. 2 Specifies the ARNs for the cluster-specific Operator roles. For example, in the sample output the ARN for the role required by the Machine Config Operator is arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-machine-api-aws-cloud-credentials . 3 Displays the endpoint URL for the cluster-specific OIDC provider. Important After the cluster is deleted, you need the cluster ID to delete the cluster-specific STS resources using the ROSA CLI. Delete the cluster by using either the OpenShift Cluster Manager or the ROSA CLI ( rosa ): To delete the cluster by using the OpenShift Cluster Manager: Navigate to the OpenShift Cluster Manager . Click the Options menu to your cluster and select Delete cluster . Type the name of your cluster into the prompt and click Delete . To delete the cluster using the ROSA CLI: Run the following command, replacing <cluster_name> with the name or ID of your cluster: USD rosa delete cluster --cluster=<cluster_name> --watch Important You must wait for cluster deletion to complete before you remove the Operator roles and the OIDC provider. Delete the cluster-specific Operator IAM roles by running the following command: USD rosa delete operator-roles --prefix <operator_role_prefix> Delete the OIDC provider by running the following command: USD rosa delete oidc-provider --oidc-config-id <oidc_config_id> Troubleshooting If the cluster cannot be deleted because of missing IAM roles, see Repairing a cluster that cannot be deleted . Ensure that there are no add-ons for your cluster pending in the Hybrid Cloud Console . Ensure that all AWS resources and dependencies have been deleted in the Amazon Web Console. 8.2. Deleting the account-wide IAM resources After you have deleted all Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters that depend on the account-wide AWS Identity and Access Management (IAM) resources, you can delete the account-wide resources. If you no longer need to install a ROSA with HCP cluster by using Red Hat OpenShift Cluster Manager, you can also delete the OpenShift Cluster Manager and user IAM roles. Important The account-wide IAM roles and policies might be used by other ROSA with HCP clusters in the same AWS account. Only remove the resources if they are not required by other clusters. The OpenShift Cluster Manager and user IAM roles are required if you want to install, manage, and delete other Red Hat OpenShift Service on AWS clusters in the same AWS account by using OpenShift Cluster Manager. Only remove the roles if you no longer need to install Red Hat OpenShift Service on AWS clusters in your account by using OpenShift Cluster Manager. For more information about repairing your cluster if these roles are removed before deletion, see "Repairing a cluster that cannot be deleted" in Troubleshooting cluster deployments . Additional resources Repairing a cluster that cannot be deleted 8.2.1. Deleting the account-wide IAM roles and policies This section provides steps to delete the account-wide IAM roles and policies that you created for ROSA with HCP deployments, along with the account-wide Operator policies. You can delete the account-wide AWS Identity and Access Management (IAM) roles and policies only after deleting all of the ROSA with HCP clusters that depend on them. Important The account-wide IAM roles and policies might be used by other Red Hat OpenShift Service on AWS in the same AWS account. 
Only remove the roles if they are not required by other clusters. Prerequisites You have account-wide IAM roles that you want to delete. You have installed and configured the latest ROSA CLI ( rosa ) on your installation host. Procedure Delete the account-wide roles: List the account-wide roles in your AWS account by using the ROSA CLI ( rosa ): USD rosa list account-roles Example output I: Fetching account roles ROLE NAME ROLE TYPE ROLE ARN OPENSHIFT VERSION AWS Managed ManagedOpenShift-HCP-ROSA-Installer-Role Installer arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Installer-Role 4.18 Yes ManagedOpenShift-HCP-ROSA-Support-Role Support arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Support-Role 4.18 Yes ManagedOpenShift-HCP-ROSA-Worker-Role Worker arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Worker-Role 4.18 Yes Delete the account-wide roles: USD rosa delete account-roles --prefix <prefix> --mode auto 1 1 You must include the --<prefix> argument. Replace <prefix> with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix, ManagedOpenShift . Important The account-wide IAM roles might be used by other ROSA clusters in the same AWS account. Only remove the roles if they are not required by other clusters. Example output W: There are no classic account roles to be deleted I: Deleting hosted CP account roles ? Delete the account role 'delete-rosa-HCP-ROSA-Installer-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Installer-Role' ? Delete the account role 'delete-rosa-HCP-ROSA-Support-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Support-Role' ? Delete the account role 'delete-rosa-HCP-ROSA-Worker-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Worker-Role' I: Successfully deleted the hosted CP account roles Delete the account-wide in-line and Operator policies: Under the Policies page in the AWS IAM Console , filter the list of policies by the prefix that you specified when you created the account-wide roles and policies. Note If you did not specify a custom prefix when you created the account-wide roles, search for the default prefix, ManagedOpenShift . Delete the account-wide in-line policies and Operator policies by using the AWS IAM Console . For more information about deleting IAM policies by using the AWS IAM Console, see Deleting IAM policies in the AWS documentation. Important The account-wide in-line and Operator IAM policies might be used by other ROSA with HCP in the same AWS account. Only remove the roles if they are not required by other clusters. Additional resources About IAM resources 8.2.2. Unlinking and deleting the OpenShift Cluster Manager and user IAM roles When you install a ROSA with HCP cluster by using Red Hat OpenShift Cluster Manager, you also create OpenShift Cluster Manager and user Identity and Access Management (IAM) roles that link to your Red Hat organization. After deleting your cluster, you can unlink and delete the roles by using the ROSA CLI ( rosa ). Important The OpenShift Cluster Manager and user IAM roles are required if you want to use OpenShift Cluster Manager to install and manage other ROSA with HCP in the same AWS account. Only remove the roles if you no longer need to use the OpenShift Cluster Manager to install ROSA with HCP clusters. Prerequisites You created OpenShift Cluster Manager and user IAM roles and linked them to your Red Hat organization. 
You have installed and configured the latest ROSA CLI ( rosa ) on your installation host. You have organization administrator privileges in your Red Hat organization. Procedure Unlink the OpenShift Cluster Manager IAM role from your Red Hat organization and delete the role: List the OpenShift Cluster Manager IAM roles in your AWS account: USD rosa list ocm-roles Example output I: Fetching ocm roles ROLE NAME ROLE ARN LINKED ADMIN AWS Managed ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> Yes Yes Yes If your OpenShift Cluster Manager IAM role is listed as linked in the output of the preceding command, unlink the role from your Red Hat organization by running the following command: USD rosa unlink ocm-role --role-arn <arn> 1 1 Replace <arn> with the Amazon Resource Name (ARN) for your OpenShift Cluster Manager IAM role. The ARN is specified in the output of the preceding command. In the preceding example, the ARN is in the format arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> . Example output I: Unlinking OCM role ? Unlink the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' role from organization '<red_hat_organization_id>'? Yes I: Successfully unlinked role-arn 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' from organization account '<red_hat_organization_id>' Delete the OpenShift Cluster Manager IAM role and policies: USD rosa delete ocm-role --role-arn <arn> Example output I: Deleting OCM role ? OCM Role ARN: arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> ? Delete 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' ocm role? Yes ? OCM role deletion mode: auto 1 I: Successfully deleted the OCM role 1 Specifies the deletion mode. You can use auto mode to automatically delete the OpenShift Cluster Manager IAM role and policies. In manual mode, the ROSA CLI generates the aws commands needed to delete the role and policies. manual mode enables you to review the details before running the aws commands manually. Unlink the user IAM role from your Red Hat organization and delete the role: List the user IAM roles in your AWS account: USD rosa list user-roles Example output I: Fetching user roles ROLE NAME ROLE ARN LINKED ManagedOpenShift-User-<ocm_user_name>-Role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role Yes If your user IAM role is listed as linked in the output of the preceding command, unlink the role from your Red Hat organization: USD rosa unlink user-role --role-arn <arn> 1 1 Replace <arn> with the Amazon Resource Name (ARN) for your user IAM role. The ARN is specified in the output of the preceding command. In the preceding example, the ARN is in the format arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role . Example output I: Unlinking user role ? Unlink the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' role from the current account '<ocm_user_account_id>'? Yes I: Successfully unlinked role ARN 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' from account '<ocm_user_account_id>' Delete the user IAM role: USD rosa delete user-role --role-arn <arn> Example output I: Deleting user role ? 
User Role ARN: arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role ? Delete the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' role from the AWS account? Yes ? User role deletion mode: auto 1 I: Successfully deleted the user role 1 Specifies the deletion mode. You can use auto mode to automatically delete the user IAM role. In manual mode, the ROSA CLI generates the aws command needed to delete the role. manual mode enables you to review the details before running the aws command manually.
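Taken together, a full tear-down of a cluster and its IAM resources follows the sequence sketched below; every angle-bracket value is a placeholder that comes from the rosa describe cluster and rosa list output for your own account, and the account-wide, OCM, and user roles should be removed only if no other cluster or OpenShift Cluster Manager workflow still needs them.
$ rosa describe cluster --cluster=<cluster_name>              # record the cluster ID, Operator role ARNs, and OIDC endpoint
$ rosa delete cluster --cluster=<cluster_name> --watch        # wait for the deletion to complete before continuing
$ rosa delete operator-roles --prefix <operator_role_prefix>  # cluster-specific Operator roles
$ rosa delete oidc-provider --oidc-config-id <oidc_config_id> # cluster-specific OIDC provider
$ rosa delete account-roles --prefix <prefix> --mode auto     # account-wide roles, only if unused by other clusters
$ rosa unlink ocm-role --role-arn <ocm_role_arn>              # OCM and user roles, only if OpenShift Cluster Manager
$ rosa delete ocm-role --role-arn <ocm_role_arn>              # is no longer needed for this AWS account
$ rosa unlink user-role --role-arn <user_role_arn>
$ rosa delete user-role --role-arn <user_role_arn>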
[ "rosa describe cluster --cluster=<cluster_name>", "Name: test_cluster Domain Prefix: test_cluster Display Name: test_cluster ID: <cluster_id> 1 External ID: <external_id> Control Plane: ROSA Service Hosted OpenShift Version: 4.18.0 Channel Group: stable DNS: test_cluster.l3cn.p3.openshiftapps.com AWS Account: <AWS_id> AWS Billing Account: <AWS_id> API URL: https://api.test_cluster.l3cn.p3.openshiftapps.com:443 Console URL: Region: us-east-1 Availability: - Control Plane: MultiAZ - Data Plane: SingleAZ Nodes: - Compute (desired): 2 - Compute (current): 0 Network: - Type: OVNKubernetes - Service CIDR: 172.30.0.0/16 - Machine CIDR: 10.0.0.0/16 - Pod CIDR: 10.128.0.0/14 - Host Prefix: /23 - Subnets: <subnet_ids> EC2 Metadata Http Tokens: optional Role (STS) ARN: arn:aws:iam::<AWS_id>:role/test_cluster-HCP-ROSA-Installer-Role Support Role ARN: arn:aws:iam::<AWS_id>:role/test_cluster-HCP-ROSA-Support-Role Instance IAM Roles: - Worker: arn:aws:iam::<AWS_id>:role/test_cluster-HCP-ROSA-Worker-Role Operator IAM Roles: 2 - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-cloud-network-config-controller-cloud-crede - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-image-registry-installer-cloud-credentials - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-ingress-operator-cloud-credentials - arn:aws:iam::<AWS_id>:role/test_cluster-kube-system-kube-controller-manager - arn:aws:iam::<AWS_id>:role/test_cluster-kube-system-capa-controller-manager - arn:aws:iam::<AWS_id>:role/test_cluster-kube-system-control-plane-operator - arn:aws:iam::<AWS_id>:role/hcpcluster-kube-system-kms-provider - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-cluster-csi-drivers-ebs-cloud-credentials Managed Policies: Yes State: ready Private: No Created: Apr 16 2024 20:32:06 UTC User Workload Monitoring: Enabled Details Page: https://console.redhat.com/openshift/details/s/<cluster_id> OIDC Endpoint URL: https://oidc.op1.openshiftapps.com/<cluster_id> (Managed) 3 Audit Log Forwarding: Disabled External Authentication: Disabled", "rosa delete cluster --cluster=<cluster_name> --watch", "rosa delete operator-roles --prefix <operator_role_prefix>", "rosa delete oidc-provider --oidc-config-id <oidc_config_id>", "rosa list account-roles", "I: Fetching account roles ROLE NAME ROLE TYPE ROLE ARN OPENSHIFT VERSION AWS Managed ManagedOpenShift-HCP-ROSA-Installer-Role Installer arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Installer-Role 4.18 Yes ManagedOpenShift-HCP-ROSA-Support-Role Support arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Support-Role 4.18 Yes ManagedOpenShift-HCP-ROSA-Worker-Role Worker arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Worker-Role 4.18 Yes", "rosa delete account-roles --prefix <prefix> --mode auto 1", "W: There are no classic account roles to be deleted I: Deleting hosted CP account roles ? Delete the account role 'delete-rosa-HCP-ROSA-Installer-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Installer-Role' ? Delete the account role 'delete-rosa-HCP-ROSA-Support-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Support-Role' ? Delete the account role 'delete-rosa-HCP-ROSA-Worker-Role'? 
Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Worker-Role' I: Successfully deleted the hosted CP account roles", "rosa list ocm-roles", "I: Fetching ocm roles ROLE NAME ROLE ARN LINKED ADMIN AWS Managed ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> Yes Yes Yes", "rosa unlink ocm-role --role-arn <arn> 1", "I: Unlinking OCM role ? Unlink the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' role from organization '<red_hat_organization_id>'? Yes I: Successfully unlinked role-arn 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' from organization account '<red_hat_organization_id>'", "rosa delete ocm-role --role-arn <arn>", "I: Deleting OCM role ? OCM Role ARN: arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> ? Delete 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' ocm role? Yes ? OCM role deletion mode: auto 1 I: Successfully deleted the OCM role", "rosa list user-roles", "I: Fetching user roles ROLE NAME ROLE ARN LINKED ManagedOpenShift-User-<ocm_user_name>-Role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role Yes", "rosa unlink user-role --role-arn <arn> 1", "I: Unlinking user role ? Unlink the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' role from the current account '<ocm_user_account_id>'? Yes I: Successfully unlinked role ARN 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' from account '<ocm_user_account_id>'", "rosa delete user-role --role-arn <arn>", "I: Deleting user role ? User Role ARN: arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role ? Delete the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' role from the AWS account? Yes ? User role deletion mode: auto 1 I: Successfully deleted the user role" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_rosa_with_hcp_clusters/rosa-hcp-deleting-cluster
Chapter 7. Troubleshooting alerts and errors in OpenShift Data Foundation
Chapter 7. Troubleshooting alerts and errors in OpenShift Data Foundation 7.1. Resolving alerts and errors Red Hat OpenShift Data Foundation can detect and automatically resolve a number of common failure scenarios. However, some problems require administrator intervention. To know the errors currently firing, check one of the following locations: Observe Alerting Firing option Home Overview Cluster tab Storage Data Foundation Storage System storage system link in the pop up Overview Block and File tab Storage Data Foundation Storage System storage system link in the pop up Overview Object tab Copy the error displayed and search it in the following section to know its severity and resolution: Name : CephMonVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ USDvalue }} different versions of Ceph Mon components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephOSDVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ USDvalue }} different versions of Ceph OSD components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephClusterCriticallyFull Message : Storage cluster is critically full and needs immediate expansion Description : Storage cluster utilization has crossed 85%. Severity : Critical Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : CephClusterNearFull Message : Storage cluster is nearing full. Expansion is required. Description : Storage cluster utilization has crossed 75%. Severity : Warning Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : NooBaaBucketErrorState Message : A NooBaa Bucket Is In Error State Description : A NooBaa bucket {{ USDlabels.bucket_name }} is in error state for more than 6m Severity : Warning Resolution : Workaround Procedure : Finding the error code of an unhealthy bucket Name : NooBaaNamespaceResourceErrorState Message : A NooBaa Namespace Resource Is In Error State Description : A NooBaa namespace resource {{ USDlabels.namespace_resource_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy namespace store resource Name : NooBaaNamespaceBucketErrorState Message : A NooBaa Namespace Bucket Is In Error State Description : A NooBaa namespace bucket {{ USDlabels.bucket_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy bucket Name : CephMdsMissingReplicas Message : Insufficient replicas for storage metadata service. Description : `Minimum required replicas for storage metadata service not available. Might affect the working of storage cluster.` Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, contact Red Hat support . Name : CephMgrIsAbsent Message : Storage metrics collector service not available anymore. Description : Ceph Manager has disappeared from Prometheus target discovery. 
Severity : Critical Resolution : Contact Red Hat support Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Once the upgrade is complete, check for alerts and operator status. If the issue persists or cannot be identified, contact Red Hat support . Name : CephNodeDown Message : Storage node {{ USDlabels.node }} went down Description : Storage node {{ USDlabels.node }} went down. Check the node immediately. Severity : Critical Resolution : Contact Red Hat support Procedure : Check which node stopped functioning and its cause. Take appropriate actions to recover the node. If node cannot be recovered: See Replacing storage nodes for Red Hat OpenShift Data Foundation Contact Red Hat support . Name : CephClusterErrorState Message : Storage cluster is in error state Description : Storage cluster is in error state for more than 10m. Severity : Critical Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephClusterWarningState Message : Storage cluster is in degraded state Description : Storage cluster is in warning state for more than 10m. Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephDataRecoveryTakingTooLong Message : Data recovery is slow Description : Data recovery has been active for too long. Severity : Warning Resolution : Contact Red Hat support Name : CephOSDDiskNotResponding Message : Disk not responding Description : Disk device {{ USDlabels.device }} not responding, on host {{ USDlabels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephOSDDiskUnavailable Message : Disk not accessible Description : Disk device {{ USDlabels.device }} not accessible on host {{ USDlabels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephPGRepairTakingTooLong Message : Self heal problems detected Description : Self heal operations taking too long. Severity : Warning Resolution : Contact Red Hat support Name : CephMonHighNumberOfLeaderChanges Message : Storage Cluster has seen many leader changes recently. Description : 'Ceph Monitor "{{ USDlabels.job }}": instance {{ USDlabels.instance }} has seen {{ USDvalue printf "%.2f" }} leader changes per minute recently.' Severity : Warning Resolution : Contact Red Hat support Name : CephMonQuorumAtRisk Message : Storage quorum at risk Description : Storage cluster quorum is low. Severity : Critical Resolution : Contact Red Hat support Name : ClusterObjectStoreState Message : Cluster Object Store is in an unhealthy state. Check Ceph cluster health . Description : Cluster Object Store is in an unhealthy state for more than 15s. Check Ceph cluster health . Severity : Critical Resolution : Contact Red Hat support Procedure : Check the CephObjectStore CR instance. Contact Red Hat support . Name : CephOSDFlapping Message : Storage daemon osd.x has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause . 
Description : Storage OSD restarts more than 5 times in 5 minutes . Severity : Critical Resolution : Contact Red Hat support Name : OdfPoolMirroringImageHealth Message : Mirroring image(s) (PV) in the pool <pool-name> are in Warning state for more than a 1m. Mirroring might not work as expected. Description : Disaster recovery is failing for one or a few applications. Severity : Warning Resolution : Contact Red Hat support Name : OdfMirrorDaemonStatus Message : Mirror daemon is unhealthy . Description : Disaster recovery is failing for the entire cluster. Mirror daemon is in an unhealthy status for more than 1m. Mirroring on this cluster is not working as expected. Severity : Critical Resolution : Contact Red Hat support 7.2. Resolving cluster health issues There is a finite set of possible health messages that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health checks which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health code below for more information and troubleshooting. Health code Description MON_DISK_LOW One or more Ceph Monitors are low on disk space. 7.2.1. MON_DISK_LOW This alert triggers if the available space on the file system storing the monitor database as a percentage, drops below mon_data_avail_warn (default: 15%). This may indicate that some other process or user on the system is filling up the same file system used by the monitor. It may also indicate that the monitor's database is large. Note The paths to the file system differ depending on the deployment of your mons. You can find the path to where the mon is deployed in storagecluster.yaml . Example paths: Mon deployed over PVC path: /var/lib/ceph/mon Mon deployed over hostpath: /var/lib/rook/mon In order to clear up space, view the high usage files in the file system and choose which to delete. To view the files, run: Replace <path-in-the-mon-node> with the path to the file system where mons are deployed. 7.3. Resolving cluster alerts There is a finite set of possible health alerts that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health alerts which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health alert for more information and troubleshooting. Table 7.1. Types of cluster health alerts Health alert Overview CephClusterCriticallyFull Storage cluster utilization has crossed 80%. CephClusterErrorState Storage cluster is in an error state for more than 10 minutes. CephClusterNearFull Storage cluster is nearing full capacity. Data deletion or cluster expansion is required. CephClusterReadOnly Storage cluster is read-only now and needs immediate data deletion or cluster expansion. CephClusterWarningState Storage cluster is in a warning state for more than 10 mins. CephDataRecoveryTakingTooLong Data recovery has been active for too long. CephMdsCacheUsageHigh Ceph metadata service (MDS) cache usage for the MDS daemon has exceeded 95% of the mds_cache_memory_limit . CephMdsCpuUsageHigh Ceph MDS CPU usage for the MDS daemon has exceeded the threshold for adequate performance. 
CephMdsMissingReplicas Minimum required replicas for storage metadata service not available. Might affect the working of the storage cluster. CephMgrIsAbsent Ceph Manager has disappeared from Prometheus target discovery. CephMgrIsMissingReplicas Ceph manager is missing replicas. This impacts health status reporting and will cause some of the information reported by the ceph status command to be missing or stale. In addition, the Ceph manager is responsible for a manager framework aimed at expanding the existing capabilities of Ceph. CephMonHighNumberOfLeaderChanges The Ceph monitor leader is being changed an unusual number of times. CephMonQuorumAtRisk Storage cluster quorum is low. CephMonQuorumLost The number of monitor pods in the storage cluster is not enough. CephMonVersionMismatch There are different versions of Ceph Mon components running. CephNodeDown A storage node went down. Check the node immediately. The alert should contain the node name. CephOSDCriticallyFull Utilization of back-end Object Storage Device (OSD) has crossed 80%. Free up some space immediately, expand the storage cluster, or contact support. CephOSDDiskNotResponding A disk device is not responding on one of the hosts. CephOSDDiskUnavailable A disk device is not accessible on one of the hosts. CephOSDFlapping Ceph storage OSD flapping. CephOSDNearFull One of the OSD storage devices is nearing full. CephOSDSlowOps OSD requests are taking too long to process. CephOSDVersionMismatch There are different versions of Ceph OSD components running. CephPGRepairTakingTooLong Self-healing operations are taking too long. CephPoolQuotaBytesCriticallyExhausted Storage pool quota usage has crossed 90%. CephPoolQuotaBytesNearExhaustion Storage pool quota usage has crossed 70%. OSDCPULoadHigh CPU usage in the OSD container on a specific pod has exceeded 80%, potentially affecting the performance of the OSD. PersistentVolumeUsageCritical Persistent Volume Claim usage has exceeded 85% of its capacity. PersistentVolumeUsageNearFull Persistent Volume Claim usage has exceeded 75% of its capacity. 7.3.1. CephClusterCriticallyFull Meaning Storage cluster utilization has crossed 80% and the cluster will become read-only once utilization crosses 85%. Free up some space or expand the storage cluster immediately. It is common to see alerts related to Object Storage Device (OSD) full or near full prior to this alert. Impact High Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information to free up some space. 7.3.2. CephClusterErrorState Meaning This alert reflects that the storage cluster is in ERROR state for an unacceptable amount of time and this impacts the storage availability. Check for other alerts that would have triggered prior to this one and troubleshoot those alerts first. Impact Critical Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. 
Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.3. CephClusterNearFull Meaning Storage cluster utilization has crossed 75% and will become read-only at 85%. Free up some space or expand the storage cluster. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.4. CephClusterReadOnly Meaning Storage cluster utilization has crossed 85% and will become read-only now. Free up some space or expand the storage cluster immediately. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.5. CephClusterWarningState Meaning This alert reflects that the storage cluster has been in a warning state for an unacceptable amount of time. While the storage operations will continue to function in this state, it is recommended to fix the errors so that the cluster does not get into an error state. Check for other alerts that might have triggered prior to this one and troubleshoot those alerts first. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.6. CephDataRecoveryTakingTooLong Meaning Data recovery is slow. Check whether all the Object Storage Devices (OSDs) are up and running. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. 
Run the following command to gather the debugging information for the Ceph cluster: 7.3.7. CephMdsCacheUsageHigh Meaning When the storage metadata service (MDS) cannot keep its cache usage under the target threshold specified by mds_health_cache_threshold , or 150% of the cache limit set by mds_cache_memory_limit , the MDS sends a health alert to the monitors indicating the cache is too large. As a result, MDS-related operations become slow. Impact High Diagnosis The MDS tries to stay under a reservation of the mds_cache_memory_limit by trimming unused metadata in its cache and recalling cached items in the client caches. It is possible for the MDS to exceed this limit due to slow recall from clients as a result of multiple clients accessing the files. Mitigation Make sure you have enough memory provisioned for MDS cache. Memory resources for the MDS pods need to be updated in the ocs-storageCluster in order to increase the mds_cache_memory_limit . Run the following command to set the memory of MDS pods, for example, 16GB: OpenShift Data Foundation automatically sets mds_cache_memory_limit to half of the MDS pod memory limit. If the memory is set to 8GB using the command, then the operator sets the MDS cache memory limit to 4GB. 7.3.8. CephMdsCpuUsageHigh Meaning The storage metadata service (MDS) serves filesystem metadata. The MDS is crucial for any file creation, rename, deletion, and update operations. MDS by default is allocated two or three CPUs. This does not cause issues as long as there are not too many metadata operations. When the metadata operation load increases enough to trigger this alert, it means the default CPU allocation is unable to cope with the load. You need to increase the CPU allocation or run multiple active MDS servers. Impact High Diagnosis Click Workloads Pods . Select the corresponding MDS pod and click on the Metrics tab. There you will see the allocated and used CPU. By default, the alert is fired if the used CPU is 67% of allocated CPU for 6 hours. If this is the case, follow the steps in the mitigation section. Mitigation You need to either do a vertical or a horizontal scaling of CPU. For more information, see the Description and Runbook section of the alert. Use the following command to set the number of allocated CPU for MDS, for example, 8: In order to run multiple active MDS servers, use the following command: Make sure you have enough CPU provisioned for MDS depending on the load. Important Always increase the activeMetadataServers by 1 . The scaling of activeMetadataServers works only if you have more than one PV. If there is only one PV that is causing CPU load, look at increasing the CPU resource as described above. 7.3.9. CephMdsMissingReplicas Meaning Minimum required replicas for the storage metadata service (MDS) are not available. MDS is responsible for file metadata. Degradation of the MDS service can affect how the storage cluster works (related to the CephFS storage class) and should be fixed as soon as possible. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. 
Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.10. CephMgrIsAbsent Meaning Not having a Ceph manager running impacts the monitoring of the cluster. Persistent Volume Claim (PVC) creation and deletion requests should be resolved as soon as possible. Impact High Diagnosis Verify that the rook-ceph-mgr pod is failing, and restart it if necessary. If the Ceph mgr pod restart fails, follow the general pod troubleshooting to resolve the issue. Verify that the Ceph mgr pod is failing: Describe the Ceph mgr pod for more details: <pod_name> Specify the rook-ceph-mgr pod name from the previous step. Analyze the errors related to resource issues. Delete the pod, and wait for the pod to restart: Follow these steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.11. CephMgrIsMissingReplicas Meaning The Ceph manager is missing replicas. To resolve this alert, you need to determine the cause of the disappearance of the Ceph manager and restart it if necessary. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.12. CephMonHighNumberOfLeaderChanges Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. A problem with the network connection or another kind of problem in one or more monitor pods produces unusually frequent changes of the leader. This situation can negatively affect the storage cluster performance. Impact Medium Important Check for any network issues.
If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Print the logs of the affected monitor pod to gather more information about the issue: <rook-ceph-mon-X-yyyy> Specify the name of the affected monitor pod. Alternatively, use the OpenShift Web Console to open the logs of the affected monitor pod. More information about possible causes is reflected in the log. Perform the general pod troubleshooting steps: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.13. CephMonQuorumAtRisk Meaning Multiple MONs work together to provide redundancy. Each of the MONs keeps a copy of the metadata. The cluster is deployed with 3 MONs, and requires 2 or more MONs to be up and running for quorum and for the storage operations to run. If quorum is lost, access to data is at risk. Impact High Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Perform the following for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.14. CephMonQuorumLost Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. A problem with the network connection or another kind of problem in one or more monitor pods produces unusually frequent changes of the leader. This situation can negatively affect the storage cluster performance. Impact High Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Restore the Ceph MON Quorum.
For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Alternatively, perform general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.15. CephMonVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to determine whether an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for the CatalogSourcesUnhealthy type, which means that the catalog sources are healthy. Check the OCS operator pod status to see if an OCS operator upgrade is in progress. If you determine that the ocs-operator upgrade is in progress, wait for 5 minutes, and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.16. CephNodeDown Meaning A node running Ceph pods is down. While storage operations will continue to function as Ceph is designed to deal with a node failure, it is recommended to resolve the issue to minimize the risk of another node going down and affecting storage functions. Impact Medium Diagnosis List all the pods that are running and failing: Important Ensure that you meet the OpenShift Data Foundation resource requirements so that the Object Storage Device (OSD) pods are scheduled on the new node. This may take a few minutes as the Ceph cluster recovers data for the failing but now recovering OSD. To watch this recovery in action, ensure that the OSD pods are correctly placed on the new worker node. Check if the OSD pods that were previously failing are now running: If the previously failing OSD pods have not been scheduled, use the describe command and check the events for reasons the pods were not rescheduled. Describe the events for the failing OSD pod: Find the one or more failing OSD pods: In the events section, look for the failure reasons, such as resource requirements not being met. In addition, you can use the rook-ceph-toolbox to watch the recovery. This step is optional, but is helpful for large Ceph clusters. To access the toolbox, run the following command: From the rsh command prompt, run the following, and watch for "recovery" under the io section: Determine if there are failed nodes.
Get the list of worker nodes, and check for the node status: Describe the node which is of the NotReady status to get more information about the failure: Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.17. CephOSDCriticallyFull Meaning One of the Object Storage Devices (OSDs) is critically full. Expand the cluster immediately. Impact High Diagnosis Deleting data to free up storage space You can delete data, and the cluster will resolve the alert through self healing processes. Important This is only applicable to OpenShift Data Foundation clusters that are near or full but not in read-only mode. Read-only mode prevents any changes that include deleting data, that is, deletion of Persistent Volume Claim (PVC), Persistent Volume (PV) or both. Expanding the storage capacity Current storage size is less than 1 TB You must first assess the ability to expand. For every 1 TB of storage added, the cluster needs to have 3 nodes each with a minimum available 2 vCPUs and 8 GiB memory. You can increase the storage capacity to 4 TB via the add-on and the cluster will resolve the alert through self healing processes. If the minimum vCPU and memory resource requirements are not met, you need to add 3 additional worker nodes to the cluster. Mitigation If your current storage size is equal to 4 TB, contact Red Hat support. Optional: Run the following command to gather the debugging information for the Ceph cluster: 7.3.18. CephOSDDiskNotResponding Meaning A disk device is not responding. Check whether all the Object Storage Devices (OSDs) are up and running. Impact Medium Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.19. CephOSDDiskUnavailable Meaning A disk device is not accessible on one of the hosts and its corresponding Object Storage Device (OSD) is marked out by the Ceph cluster. This alert is raised when a Ceph node fails to recover within 10 minutes. Impact High Diagnosis Determine the failed node Get the list of worker nodes, and check for the node status: Describe the node which is of NotReady status to get more information on the failure: 7.3.20. CephOSDFlapping Meaning A storage daemon has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause. Impact High Diagnosis Follow the steps in the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide. 
Alternatively, follow the steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.21. CephOSDNearFull Meaning Utilization of a back-end Object Storage Device (OSD) has crossed 75% on a host. Impact High Mitigation Free up some space in the cluster, expand the storage cluster, or contact Red Hat support. For more information on scaling storage, see the Scaling storage guide . 7.3.22. CephOSDSlowOps Meaning An Object Storage Device (OSD) with slow requests is one that cannot service the I/O operations per second (IOPS) in its queue within the time defined by the osd_op_complaint_time parameter. By default, this parameter is set to 30 seconds. Impact Medium Diagnosis More information about the slow requests can be obtained using the OpenShift console. Access the OSD pod terminal, and run the following commands: Note The number of the OSD is seen in the pod name. For example, in rook-ceph-osd-0-5d86d4d8d4-zlqkx , <0> is the OSD. Mitigation The main causes of the OSDs having slow requests are: Problems with the underlying hardware or infrastructure, such as disk drives, hosts, racks, or network switches. Use the OpenShift monitoring console to find the alerts or errors about cluster resources. This can give you an idea about the root cause of the slow operations in the OSD. Problems with the network. These problems are usually connected with flapping OSDs. See the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide. If it is a network issue, escalate to the OpenShift Data Foundation team. System load. Use the OpenShift console to review the metrics of the OSD pod and the node which is running the OSD. Adding or assigning more resources can be a possible solution. 7.3.23. CephOSDVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to determine whether an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for the CatalogSourcesUnhealthy type, which means that the catalog sources are healthy. Check the OCS operator pod status to see if an OCS operator upgrade is in progress. If you determine that the ocs-operator upgrade is in progress, wait for 5 minutes, and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. 7.3.24.
CephPGRepairTakingTooLong Meaning Self-healing operations are taking too long. Impact High Diagnosis Check for inconsistent Placement Groups (PGs), and repair them. For more information, see the Red Hat Knowledgebase solution Handle Inconsistent Placement Groups in Ceph . 7.3.25. CephPoolQuotaBytesCriticallyExhausted Meaning One or more pools have reached, or are very close to reaching, their quota. The threshold to trigger this error condition is controlled by the mon_pool_quota_crit_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.26. CephPoolQuotaBytesNearExhaustion Meaning One or more pools are approaching a configured fullness threshold. One threshold that can trigger this warning condition is the mon_pool_quota_warn_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.27. OSDCPULoadHigh Meaning The OSD is a critical component in Ceph storage, responsible for managing data placement and recovery. High CPU usage in the OSD container suggests increased processing demands, potentially leading to degraded storage performance. Impact High Diagnosis Navigate to the Kubernetes dashboard or equivalent. Access the Workloads section and select the relevant pod associated with the OSD alert. Click the Metrics tab to view CPU metrics for the OSD container. Verify that the CPU usage exceeds 80% over a significant period (as specified in the alert configuration). Mitigation If the OSD CPU usage is consistently high, consider taking the following steps: Evaluate the overall storage cluster performance and identify the OSDs contributing to high CPU usage. Increase the number of OSDs in the cluster by adding new storage devices to the existing nodes or adding new nodes with new storage devices. Review the Scaling storage guide for instructions to help distribute the load and improve overall system performance. 7.3.28. PersistentVolumeUsageCritical Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to in a timely manner. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.3.29. PersistentVolumeUsageNearFull Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to in a timely manner. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.4. Finding the error code of an unhealthy bucket Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Object Bucket Claims tab. Look for the object bucket claims (OBCs) that are not in the Bound state and click one of them.
Click the Events tab and do one of the following: Look for events that might give you a hint about the current state of the bucket. Click the YAML tab and look for related errors around the status and mode sections of the YAML. If the OBC is in the Pending state, the error might appear in the product logs. However, in this case, it is recommended to verify that all the variables provided are accurate. 7.5. Finding the error code of an unhealthy namespace store resource Procedure In the OpenShift Web Console, click Storage Object Storage . Click the Namespace Store tab. Look for the namespace store resources that are not in the Bound state and click one of them. Click the Events tab and do one of the following: Look for events that might give you a hint about the current state of the resource. Click the YAML tab and look for related errors around the status and mode sections of the YAML. 7.6. Recovering pods When a node (say NODE1 ) goes to the NotReady state because of some issue, the hosted pods that are using a PVC with the ReadWriteOnce (RWO) access mode try to move to a second node (say NODE2 ) but get stuck due to a multi-attach error. In such a case, you can recover MON, OSD, and application pods by using the following steps. Procedure Power off NODE1 (from the AWS or vSphere side) and ensure that NODE1 is completely down. Force delete the pods on NODE1 by using the following command: 7.7. Recovering from EBS volume detach When an OSD or MON elastic block storage (EBS) volume where the OSD disk resides is detached from the worker Amazon EC2 instance, the volume gets reattached automatically within one or two minutes. However, the OSD pod gets into a CrashLoopBackOff state. To recover the pod and bring it back to the Running state, you must restart the EC2 instance. 7.8. Enabling and disabling debug logs for rook-ceph-operator Enable the debug logs for the rook-ceph-operator to obtain information about failures that help in troubleshooting issues. Procedure Enabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: DEBUG parameter in the rook-ceph-operator-config yaml file to enable the debug logs for rook-ceph-operator. Now, the rook-ceph-operator logs contain the debug information. Disabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: INFO parameter in the rook-ceph-operator-config yaml file to disable the debug logs for rook-ceph-operator. 7.9. Resolving low Ceph monitor count alert The CephMonLowNumber alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate a low Ceph monitor count when your internal mode deployment has five or more nodes, racks, or rooms, and when there are five or more failure domains in the deployment. You can increase the Ceph monitor count to improve the availability of the cluster. Procedure In the CephMonLowNumber alert of the notification panel or Alert Center of the OpenShift Web Console, click Configure . In the Configure Ceph Monitor pop up, click Update count . The pop up shows the recommended monitor count based on the number of failure zones. In the Configure CephMon pop up, update the monitor count value based on the recommended value and click Save changes . 7.10. Troubleshooting unhealthy blocklisted nodes 7.10.1. ODFRBDClientBlocked Meaning This alert indicates that a RADOS Block Device (RBD) client might be blocked by Ceph on a specific node within your Kubernetes cluster.
The blocklisting occurs when the ocs_rbd_client_blocklisted metric reports a value of 1 for the node. Additionally, there are pods in a CreateContainerError state on the same node. The blocklisting can potentially result in the filesystem for the Persistent Volume Claims (PVCs) using RBD becoming read-only. It is crucial to investigate this alert to prevent any disruption to your storage cluster. Impact High Diagnosis The blocklisting of an RBD client can occur due to several factors, such as network or cluster slowness. In certain cases, the exclusive lock contention among three contending clients (workload, mirror daemon, and manager/scheduler) can lead to the blocklist. Mitigation Taint the blocklisted node: In Kubernetes, consider tainting the node that is blocklisted to trigger the eviction of pods to another node. This approach relies on the assumption that the unmounting/unmapping process progresses gracefully. Once the pods have been successfully evicted, the blocklisted node can be untainted, allowing the blocklist to be cleared. The pods can then be moved back to the untainted node. Reboot the blocklisted node: If tainting the node and evicting the pods do not resolve the blocklisting issue, a reboot of the blocklisted node can be attempted. This step may help alleviate any underlying issues causing the blocklist and restore normal functionality. Important Investigating and resolving the blocklist issue promptly is essential to avoid any further impact on the storage cluster.
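The taint-and-evict mitigation described above can be performed with standard oc commands. The following is a minimal sketch only: the node name is a placeholder, the taint key odf-blocklisted is an arbitrary example, and you should confirm that the evicted pods unmount and unmap cleanly before removing the taint.

# Prevent new pods from being scheduled on the blocklisted node.
# The taint key "odf-blocklisted" is only an example.
oc adm taint nodes <node_name> odf-blocklisted=true:NoSchedule

# Evict the pods so that they are rescheduled on another node.
oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data

# After the pods are evicted and the blocklist entry is cleared,
# remove the taint and make the node schedulable again.
oc adm taint nodes <node_name> odf-blocklisted-
oc adm uncordon <node_name>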
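For the PersistentVolumeUsageCritical and PersistentVolumeUsageNearFull alerts, the same expansion can also be requested from the command line instead of the web console. This is a sketch under the assumption that the PVC name is known and that its storage class allows volume expansion; the target size is only an example.

# Check the current size and storage class of the PVC.
oc get pvc <pvc_name> -n openshift-storage

# Request a larger size; the storage class must allow volume expansion.
oc patch pvc <pvc_name> -n openshift-storage --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'

# Watch the PVC until the new capacity appears in its status.
oc get pvc <pvc_name> -n openshift-storage -w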
[ "du -a <path-in-the-mon-node> |sort -n -r |head -n10", "oc project openshift-storage", "oc get pod | grep rook-ceph", "Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep {ceph-component}", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep rook-ceph-osd", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"memory\": \"16Gi\"},\"requests\": {\"memory\": \"16Gi\"}}}}}'", "patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"8\"}, \"requests\": {\"cpu\": \"8\"}}}}}'", "patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"managedResources\": {\"cephFilesystems\":{\"activeMetadataServers\": 2}}}}'", "oc project openshift-storage", "get pod | grep rook-ceph-mds", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc get pods | grep mgr", "oc describe pods/ <pod_name>", "oc get pods | grep mgr", "oc project openshift-storage", "get pod | grep rook-ceph-mgr", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep rook-ceph-mgr", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc logs <rook-ceph-mon-X-yyyy> -n openshift-storage", "oc project openshift-storage", "get pod | grep {ceph-component}", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep rook-ceph-mon", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather 
--image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep {ceph-component}", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions", "[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]", "oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "-n openshift-storage get pods", "-n openshift-storage get pods", "-n openshift-storage get pods | grep osd", "-n openshift-storage describe pods/<osd_podname_ from_the_ previous step>", "TOOLS_POD=USD(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name) rsh -n openshift-storage USDTOOLS_POD", "ceph status", "get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'", "describe node <node_name>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "oc get pod | grep rook-ceph", "Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'", "describe node <node_name>", "oc project openshift-storage", "oc get pod | grep rook-ceph", "Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "ceph daemon osd.<id> ops", "ceph daemon osd.<id> dump_historic_ops", "oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions", "[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]", "oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage", "ceph osd pool set-quota <pool> max_bytes <bytes>", "ceph osd pool set-quota <pool> max_objects <objects>", "ceph osd pool set-quota <pool> max_bytes <bytes>", "ceph osd pool set-quota <pool> max_objects <objects>", "oc delete pod <pod-name> --grace-period=0 --force", "oc edit configmap rook-ceph-operator-config", "... 
data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: DEBUG", "oc edit configmap rook-ceph-operator-config", "... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: INFO" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/troubleshooting_openshift_data_foundation/troubleshooting-alerts-and-errors-in-openshift-data-foundation
Chapter 4. Determining hardware and OS configuration
Chapter 4. Determining hardware and OS configuration CPU The more physical cores that are available to Satellite, the higher the throughput that can be achieved for its tasks. Satellite components such as Puppet and PostgreSQL are CPU-intensive applications and benefit from a higher number of available CPU cores. Memory The more memory that is available in the system running Satellite, the better the response times for Satellite operations. Because Satellite uses PostgreSQL as its database solution, any additional memory, coupled with the tunings, improves application response times because more data can be retained in memory. Disk Because Satellite performs heavy IOPS due to repository synchronizations, package data retrieval, and high-frequency database updates for the subscription records of the content hosts, install Satellite on a high-speed SSD to avoid performance bottlenecks caused by increased disk reads or writes. Satellite requires disk IO to be at or above 60 - 80 megabytes per second of average throughput for read operations. Anything below this value can have severe implications for the operation of Satellite. Satellite components such as PostgreSQL benefit from using SSDs due to their lower latency compared to HDDs. Network The communication between the Satellite Server and Capsules is impacted by the network performance. A reliable network with minimal jitter and low latency is required for hassle-free operations such as Satellite Server and Capsule synchronization; at a minimum, ensure that the network is not causing connection resets. Server Power Management By default, your server is likely to be configured to conserve power. While this is a good approach to keep the maximum power consumption in check, it has the side effect of lowering the performance that Satellite may be able to achieve. For a server running Satellite, it is recommended to configure the BIOS to run the system in performance mode to boost the maximum performance levels that Satellite can achieve. 4.1. Benchmarking disk performance We are working to update satellite-maintain to only warn the users when its internal quick storage benchmark results in numbers below our recommended throughput. We are also working on an updated benchmark script that you can run (which will likely be integrated into satellite-maintain in the future) to get more accurate, real-world storage information. Note You may have to temporarily reduce the RAM in order to run the I/O benchmark. For example, if your Satellite Server has 256 GiB RAM, the tests would require 512 GiB of storage to run. As a workaround, you can add the mem=20G kernel option in grub during system boot to temporarily reduce the size of the RAM. The benchmark creates a file twice the size of the RAM in the specified directory and executes a series of storage I/O tests against it. The size of the file ensures that the test is not just testing the filesystem caching. If you benchmark other filesystems, for example smaller volumes such as PostgreSQL storage, you might have to reduce the RAM size as described above. If you are using different storage solutions such as SAN or iSCSI, you can expect different performance. Red Hat recommends that you stop all services before executing this script, and you will be prompted to do so. This test does not use direct I/O and will utilize file caching as normal operations would. You can find our first version of the script storage-benchmark .
To execute it, download the script to your Satellite, make it executable, and run: As noted in the README block in the script, you generally want to see an average of 100MB/sec or higher in the tests below: Local SSD based storage should give values of 600MB/sec or higher. Spinning disks should give values in the range of 100 - 200MB/sec or higher. If you see values below this, please open a support ticket for assistance. For more information, see Impact of Disk Speed on Satellite Operations . 4.2. Enabling tuned profiles On bare-metal, Red Hat recommends running the throughput-performance tuned profile on Satellite Server and Capsules. On virtual machines, Red Hat recommends running the virtual-guest profile. Procedure Check if tuned is running: If tuned is not running, enable it: Optional: View a list of available tuned profiles: Enable a tuned profile depending on your scenario: 4.3. Disable Transparent Hugepage Transparent Hugepage is a memory management technique used by the Linux kernel to reduce the overhead of using the Translation Lookaside Buffer (TLB) by using larger memory pages. Because databases have sparse memory access patterns rather than contiguous memory access patterns, database workloads often perform poorly when Transparent Hugepage is enabled. To improve PostgreSQL and Redis performance, disable Transparent Hugepage. In deployments where the databases are running on separate servers, there may be a small benefit to using Transparent Hugepage on the Satellite Server only. For more information on how to disable Transparent Hugepage, see How to disable transparent hugepages (THP) on Red Hat Enterprise Linux .
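Before following the linked solution article, you can quickly check the current Transparent Hugepage state. The sketch below shows one common way to disable it by using a kernel command-line option; it is an illustration only, and the linked Red Hat article remains the authoritative procedure for your Red Hat Enterprise Linux version.

# Show whether Transparent Hugepage is currently enabled
# (the active value is shown in brackets: [always], [madvise], or [never]).
cat /sys/kernel/mm/transparent_hugepage/enabled

# One common approach: disable it persistently with a kernel option and reboot.
# This assumes the system uses grubby to manage kernel arguments.
grubby --update-kernel=ALL --args="transparent_hugepage=never"
reboot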
[ "./storage-benchmark /var/lib/pulp", "systemctl status tuned", "systemctl enable --now tuned", "tuned-adm list", "tuned-adm profile \" My_Tuned_Profile \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/tuning_performance_of_red_hat_satellite/Determining_Hardware_and_OS_Configuration_performance-tuning
Chapter 1. Overview of authentication and authorization
Chapter 1. Overview of authentication and authorization 1.1. Glossary of common terms for OpenShift Container Platform authentication and authorization This glossary defines common terms that are used in OpenShift Container Platform authentication and authorization. authentication Authentication determines access to an OpenShift Container Platform cluster and ensures that only authenticated users access the OpenShift Container Platform cluster. authorization Authorization determines whether the identified user has permissions to perform the requested action. bearer token A bearer token is used to authenticate to the API with the header Authorization: Bearer <token> . Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. containers Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers in a data center, public or private cloud, or your local host. Custom Resource (CR) A CR is an extension of the Kubernetes API. group A group is a set of users. A group is useful for granting permissions to multiple users at one time. HTPasswd HTPasswd updates the files that store usernames and passwords for authentication of HTTP users. Keystone Keystone is a Red Hat OpenStack Platform (RHOSP) project that provides identity, token, catalog, and policy services. Lightweight directory access protocol (LDAP) LDAP is a protocol that queries user information. manual mode In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). mint mode Mint mode is the default and recommended best practice setting for the Cloud Credential Operator (CCO) to use on the platforms for which it is supported. In this mode, the CCO uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. namespace A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. node A node is a worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OAuth client An OAuth client is used to get a bearer token. OAuth server The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. OpenID Connect OpenID Connect is a protocol that authenticates users with single sign-on (SSO) to access sites that use OpenID providers. passthrough mode In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. pod A pod is the smallest logical unit in Kubernetes. A pod comprises one or more containers that run on a worker node. regular users Users that are created automatically in the cluster upon first login or via the API. request header A request header is an HTTP header that is used to provide information about the HTTP request context, so that the server can track the response of the request.
role-based access control (RBAC) A key security control to ensure that cluster users and workloads have access to only the resources required to execute their roles. service accounts Service accounts are used by the cluster components or applications. system users Users that are created automatically when the cluster is installed. users A user is an entity that can make requests to the API. 1.2. About authentication in OpenShift Container Platform To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure that only approved users access the cluster. To interact with an OpenShift Container Platform cluster, users must first authenticate to the OpenShift Container Platform API in some way. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. Note If you do not present a valid access token or certificate, your request is unauthenticated and you receive an HTTP 401 error. An administrator can configure authentication through the following tasks: Configuring an identity provider: You can define any supported identity provider in OpenShift Container Platform and add it to your cluster. Configuring the internal OAuth server : The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. You can configure the token duration and inactivity timeout, and customize the internal OAuth server URL. Note Users can view and manage OAuth tokens owned by them . Registering an OAuth client: OpenShift Container Platform includes several default OAuth clients . You can register and configure additional OAuth clients . Note When users send a request for an OAuth token, they must specify either a default or custom OAuth client that receives and uses the token. Managing cloud provider credentials using the Cloud Credential Operator : Cluster components use cloud provider credentials to get permissions required to perform cluster-related tasks. Impersonating a system admin user: You can grant cluster administrator permissions to a user by impersonating a system admin user . 1.3. About authorization in OpenShift Container Platform Authorization involves determining whether the identified user has permissions to perform the requested action. Administrators can define permissions and assign them to users using the RBAC objects, such as rules, roles, and bindings . To understand how authorization works in OpenShift Container Platform, see Evaluating authorization . You can also control access to an OpenShift Container Platform cluster through projects and namespaces . Along with controlling user access to a cluster, you can also control the actions a pod can perform and the resources it can access using security context constraints (SCCs) . You can manage authorization for OpenShift Container Platform through the following tasks: Viewing local and cluster roles and bindings. Creating a local role and assigning it to a user or group. Creating a cluster role and assigning it to a user or group: OpenShift Container Platform includes a set of default cluster roles . You can create additional cluster roles and add them to a user or group . Creating a cluster-admin user: By default, your cluster has only one cluster administrator called kubeadmin . You can create another cluster administrator .
Before creating a cluster administrator, ensure that you have configured an identity provider. Note After creating the cluster admin user, delete the existing kubeadmin user to improve cluster security. Creating service accounts: Service accounts provide a flexible way to control API access without sharing a regular user's credentials. A user can create and use a service account in applications and also as an OAuth client . Scoping tokens : A scoped token is a token that identifies as a specific user who can perform only specific operations. You can create scoped tokens to delegate some of your permissions to another user or a service account. Syncing LDAP groups: You can manage user groups in one place by syncing the groups stored in an LDAP server with the OpenShift Container Platform user groups.
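As a brief illustration of the authorization tasks listed above, the following sketch grants default cluster roles from the command line. The user and group names are placeholders, and which roles you grant depends on your own access policies:

# Grant the default "view" cluster role to a single user.
oc adm policy add-cluster-role-to-user view <username>

# Grant the default "edit" cluster role to a group.
oc adm policy add-cluster-role-to-group edit <groupname>

# Create an additional cluster administrator, then review the resulting bindings.
oc adm policy add-cluster-role-to-user cluster-admin <username>
oc get clusterrolebindings | grep cluster-admin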
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/authentication_and_authorization/overview-of-authentication-authorization
Chapter 39. Next steps
Chapter 39. Next steps Testing a decision service using test scenarios Packaging and deploying a Red Hat Decision Manager project
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/next_steps_2
E.2.3. Features of GRUB
E.2.3. Features of GRUB GRUB contains several features that make it preferable to other boot loaders available for the x86 architecture. Below is a partial list of some of the more important features: GRUB provides a true command-based, pre-OS environment on x86 machines. This feature affords the user maximum flexibility in loading operating systems with specified options or gathering information about the system. For years, many non-x86 architectures have employed pre-OS environments that allow system booting from a command line. GRUB supports Logical Block Addressing (LBA) mode. LBA places the addressing conversion used to find files in the hard drive's firmware, and is used on many IDE and all SCSI hard devices. Before LBA, boot loaders could encounter the 1024-cylinder BIOS limitation, where the BIOS could not find a file after the 1024 cylinder head of the disk. LBA support allows GRUB to boot operating systems from partitions beyond the 1024-cylinder limit, so long as the system BIOS supports LBA mode. Most modern BIOS revisions support LBA mode. GRUB can read ext2 partitions. This functionality allows GRUB to access its configuration file, /boot/grub/grub.conf , every time the system boots, eliminating the need for the user to write a new version of the first stage boot loader to the MBR when configuration changes are made. The only time a user needs to reinstall GRUB on the MBR is if the physical location of the /boot/ partition is moved on the disk.
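Because GRUB reads its configuration from /boot/grub/grub.conf at boot time, the file can be edited directly without reinstalling the boot loader. The following stanza is an illustrative example only; the device names, kernel version, and root device are placeholders that differ on every system:

default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux 6 (2.6.32-431.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_example-lv_root rhgb quiet
        initrd /initramfs-2.6.32-431.el6.x86_64.img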
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-grub-whatis-features
Chapter 2. Installing Metrics Store
Chapter 2. Installing Metrics Store Prerequisites Computing resources: 4 CPU cores 30 GB RAM 500 GB SSD disk For the Metrics Store Installer virtual machine: 4 CPU cores 8 GB RAM Note The computing resource requirements are for an all-in-one installation, with a single Metrics Store virtual machine. The all-in-one installation can collect data from up to 50 hosts, each running 20 virtual machines. Operating system: Red Hat Enterprise Linux 7.7 or later Software: Red Hat Virtualization 4.3.5 or later Network configuration: see Configuring networking for Metrics Store virtual machines 2.1. Creating the Metrics Store virtual machines To create the Metrics Store virtual machines, perform the following tasks: Configure the Metrics Store installation. Create the following Metrics Store virtual machines: The Metrics Store Installer virtual machine - a temporary virtual machine for deploying Red Hat OpenShift and services on the Metrics Store virtual machines. One or more Metrics Store virtual machines. Verify the Metrics Store virtual machines. 2.1.1. Configuring the Metrics Store installation Procedure Log in to the Manager machine using SSH. Update the packages: Copy metrics-store-config.yml.example to create metrics-store-config.yml : Edit the parameters in metrics-store-config.yml to match your installation environment, and save the file. The parameters are documented in the file. To set the logical network that is used for the metrics-store-installer and Metrics Store virtual machines, add the following lines to metrics-store-config.yml : On the Manager machine, copy /etc/ovirt-engine-metrics/secure_vars.yaml.example to /etc/ovirt-engine-metrics/secure_vars.yaml : Edit the parameters in /etc/ovirt-engine-metrics/secure_vars.yaml to match the details of your specific environment. Encrypt the secure_vars.yaml file: 2.1.2. Creating Metrics Store virtual machines Procedure Go to the ovirt-engine-metrics directory: Run the ovirt-metrics-store-installation playbook to create the virtual machines: Note To enable verbose mode for debugging, add -vvv to the end of the command, or add '-v' to enable light verbose mode, or add -vvvv to enable connection debugging. For more extensive debugging options, enable debugging through the Ansible playbook as described in Enable debugging via Ansible playbook 2.1.3. Verifying the creation of the virtual machines Procedure Log in to the Administration Portal. Click Compute Virtual Machines to verify that the metrics-store-installer virtual machine and the Metrics Store virtual machines are running. 2.1.4. Changing the default LDAP authentication identity provider (optional) In the standard Metrics Store installation, the allow_all identity provider is configured by default. You can change this default during installation by configuring the openshift_master_identity_providers parameter in the inventory file integ.ini . You can also configure the session options in the OAuth configuration in the integ.ini inventory file. Procedure Locate the integ.ini in the root directory of the metrics-store-installer virtual machine. Follow the instructions for updating the identity provider configuration in Configuring identity providers with Ansible . 2.2. Configuring networking for Metrics Store virtual machines 2.2.1. Configuring DNS resolution for Metrics Store virtual machines Procedure In the metrics-store-config.yml DNS zone parameter, public_hosted_zone should be defined as a wildcard DNS record ( *. example.com ). 
That wildcard DNS should resolve to the IP address of your master0 virtual machine. Add the hostnames of the Metrics Store virtual machines to your DNS server. 2.2.2. Setting a static MAC address for a Metrics Store virtual machine (optional) Procedure Log in to the Administration Portal. Click Compute Virtual Machines and select a Metrics Store virtual machine. In the Network Interfaces tab, select a NIC and click Edit . Select Custom MAC Address , enter the MAC address, and click OK . Reboot the virtual machine. 2.2.3. Configuring firewall ports The following table describes the firewall settings needed for communication between the ports used by Metrics Store. Table 2.1. Configure the firewall to allow connections to specific ports ID Port(s) Protocol Sources Destinations Purpose MS1 9200 TCP RHV Red Hat Virtualization Hosts RHV Manager Metrics Store VM Transfer data to ElasticSearch. MS2 5601 TCP Kibana user Metrics Store VM Give users access to the Kibana web interface. Note Whether a connection is encrypted or not depends on how you deployed the software. 2.3. Deploying Metrics Store services on Red Hat OpenShift Deploy and verify Red Hat OpenShift, Elasticsearch, Curator (for managing Elasticsearch indices and snapshots), and Kibana on the Metrics Store virtual machines. Procedure Log in to the metrics-store-installer virtual machine. Run the install_okd playbook to deploy Red Hat OpenShift and Metrics Store services to the Metrics Store virtual machines: Note To enable verbose mode for debugging, add -vvv to the end of the command, or add '-v' to enable light verbose mode, or add -vvvv to enable connection debugging. Verify the deployment by logging in to each Metrics Store virtual machine: Log in to the openshift-logging project: Check that the Elasticsearch, Curator, and Kibana pods are running: If Elasticsearch is not running, see Troubleshooting related to ElasticSearch in the OpenShift Container Platform 3.11 documentation. Check the Kibana host name and record it so you can access the Kibana console in Chapter 4, Verifying the Metrics Store installation : Cleanup Log in to the Administration Portal. Click Compute Virtual Machines and delete the metrics-store-installer virtual machine.
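On a host that uses firewalld, the ports listed in Table 2.1 can be opened on the Metrics Store virtual machine as shown in the following sketch. It assumes firewalld is the active firewall and that the default zone is appropriate for your network:

# MS1: allow Elasticsearch traffic on port 9200/TCP.
firewall-cmd --permanent --add-port=9200/tcp

# MS2: allow access to the Kibana web interface on port 5601/TCP.
firewall-cmd --permanent --add-port=5601/tcp

# Reload the firewall so the new rules take effect.
firewall-cmd --reload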
[ "yum update", "cp /etc/ovirt-engine-metrics/metrics-store-config.yml.example /etc/ovirt-engine-metrics/config.yml.d/metrics-store-config.yml", "ovirt_template_nics - the following are the default values for setting the logical network used by the metrics_store_installer and the Metrics Store virtual machines ovirt_template_nics: - name: nic1 profile_name: ovirtmgmt interface: virtio", "cp /etc/ovirt-engine-metrics/secure_vars.yaml.example /etc/ovirt-engine-metrics/secure_vars.yaml", "ansible-vault encrypt /etc/ovirt-engine-metrics/secure_vars.yaml", "cd /usr/share/ovirt-engine-metrics", "ANSIBLE_JINJA2_EXTENSIONS=\"jinja2.ext.do\" ./configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml --ask-vault-pass", "ANSIBLE_CONFIG=\"/usr/share/ansible/openshift-ansible/ansible.cfg\" ANSIBLE_ROLES_PATH=\"/usr/share/ansible/roles/:/usr/share/ansible/openshift-ansible/roles\" ansible-playbook -i integ.ini install_okd.yaml -e @vars.yaml -e @secure_vars.yaml --ask-vault-pass", "oc project openshift-logging", "oc get pods", "oc get routes" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/metrics_store_installation_guide/installing_metrics_store
Chapter 10. Managing cache settings
Chapter 10. Managing cache settings Directory Server uses the following caches: The entry cache, which contains individual directory entries. The distinguished name (DN) cache, which is used to associate DNs and relative distinguished names (RDNs) with entries. The database cache, which contains the database index ( *.db ) files. For the highest performance improvements, all cache sizes must be able to store all of their records. If you do not use the recommended auto-sizing feature and do not have enough RAM available, assign free memory to the caches in the previously shown order. 10.1. How the cache-autosize and cache-autosize-split parameters influence the database and entry cache sizes By default, Directory Server uses an auto-sizing feature to optimize the size of both the database and entry cache based on the hardware resources of the server when the instance starts. Important Red Hat recommends using the auto-sizing feature and not setting cache sizes manually. The following parameters in the cn=config,cn=ldbm database,cn=plugins,cn=config entry control the auto-sizing: nsslapd-cache-autosize This setting controls whether auto-sizing is enabled for the database and entry cache. Auto-sizing is enabled: For both the database and entry cache, if the nsslapd-cache-autosize parameter is set to a value greater than 0 . For the database cache, if the nsslapd-cache-autosize and nsslapd-dbcachesize parameters are set to 0 . For the entry cache, if the nsslapd-cache-autosize and nsslapd-cachememsize parameters are set to 0 . nsslapd-cache-autosize-split The value sets the percentage of RAM that Directory Server uses for the database cache. The server uses the remaining percentage for the entry cache. Using more than 1.5 GB RAM for the database cache does not improve the performance. Therefore, Directory Server limits the database cache to 1.5 GB. By default, Directory Server uses the following default values: nsslapd-cache-autosize: 25 nsslapd-cache-autosize-split: 25 nsslapd-dbcachesize: 1,536 MB Using these settings, 25% of the system's free RAM is used ( nsslapd-cache-autosize ). From this memory, the server uses 25% for the database cache ( nsslapd-cache-autosize-split ) and the remaining 75% for the entry cache. Depending on the free RAM, this results in the following cache sizes: Table 10.1. Cache sizes if both nsslapd-cache-autosize and nsslapd-cache-autosize-split use their default values
GB of free RAM | Database cache size | Entry cache size
1 GB | 64 MB | 192 MB
2 GB | 128 MB | 384 MB
4 GB | 256 MB | 768 MB
8 GB | 512 MB | 1,536 MB
16 GB | 1,024 MB | 3,072 MB
32 GB | 1,536 MB | 6,656 MB
64 GB | 1,536 MB | 14,848 MB
128 GB | 1,536 MB | 31,232 MB
10.2. Required cache sizes The dsconf monitor dbmon command enables you to monitor cache statistics at runtime.
To display the statistics, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com monitor dbmon DB Monitor Report: 2022-02-24 10:25:16 -------------------------------------------------------- Database Cache: - Cache Hit Ratio: 50% - Free Space: 397.31 KB - Free Percentage: 2.2% - RO Page Drops: 0 - Pages In: 2934772 - Pages Out: 219075 Normalized DN Cache: - Cache Hit Ratio: 60% - Free Space: 19.98 MB - Free Percentage: 99.9% - DN Count: 100000 - Evictions: 9282348 Backends: - dc=example,dc=com (userroot): - Entry Cache Hit Ratio: 66% - Entry Cache Count: 50000 - Entry Cache Free Space: 2.0 KB - Entry Cache Free Percentage: 0.8% - Entry Cache Average Size: 8.9 KB - DN Cache Hit Ratio: 21% - DN Cache Count: 100000 - DN Cache Free Space: 4.29 MB - DN Cache Free Percentage: 69.8% - DN Cache Average Size: 130.0 B Optionally, pass the -b back_end or -x option to the command to display the statistics for a specific back end or the index. If your caches are sufficiently sized, the number in DN Cache Count matches the values in the Cache Count backend entries. Additionally, if all of the entries and DNs fit within their respective caches, the Entry Cache Count value matches the DN Cache Count value. The output of the example shows: Only 2.2% free database cache is left: Database Cache: ... - Free Space: 397.31 KB - Free Percentage: 2.2% However, to operate efficiently, at least 15% free database cache is required. To determine the optimal size of the database cache, calculate the sizes of all *.db files in the /var/lib/dirsrv/slapd- instance_name /db/ directory including subdirectories and the changelog database, and add 12% for overhead. To set the database cache, see Setting the database cache size using the command line . The DN cache of the userroot database is well-chosen: Backends: - dc=example,dc=com (userroot): ... - DN Cache Count: 100000 - DN Cache Free Space: 4.29 MB - DN Cache Free Percentage: 69.8% - DN Cache Average Size: 130.0 B The DN cache of the database contains 100000 records, 69.8% of the cache is free, and each DN in memory requires 130 bytes on average. To set the DN cache, see Setting the DN cache size using the command line . The statistics on the entry cache of the userroot database indicate that the entry cache value should be increased for better performance: Backends: - dc=example,dc=com (userroot): ... - Entry Cache Count: 50000 - Entry Cache Free Space: 2.0 KB - Entry Cache Free Percentage: 0.8% - Entry Cache Average Size: 8.9 KB The entry cache of this database contains 50000 records, and only 2 KB of free space is left. To enable Directory Server to cache all 100000 entries, the cache must be increased to a minimum of 890 MB (100000 entries * 8.9 KB average entry size). However, Red Hat recommends rounding the minimum required size up to the next GB and doubling the result. In this example, the entry cache should be set to 2 GB. To set the entry cache, see Setting the entry cache size using the command line . 10.3. Setting the database cache size using the command line The database cache contains the Berkeley database index files for the database, meaning all of the *.db and other files used for attribute indexing by the database. This value is passed to the Berkeley DB API function set_cachesize() . This cache size has less of an impact on Directory Server performance than the entry cache size, but if there is available RAM after the entry cache size is set, increase the amount of memory allocated to the database cache.
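To choose a value, you can total the *.db files on disk and add the 12% overhead mentioned above. The following is a rough, read-only sketch; the instance name in the path is a placeholder and the result is only a starting point:
# find /var/lib/dirsrv/slapd-instance_name/db -name '*.db' -exec du -b {} + | awk '{ sum += $1 } END { printf "suggested minimum dbcachesize: %.0f bytes\n", sum * 1.12 }'
Compare the result with the currently configured value and with the Free Percentage that dbmon reports.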
Procedure Disable automatic cache tuning: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend config set --cache-autosize=0 Manually set the database cache size: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend config set --dbcachesize= 268435456 Specify the database cache size in bytes. In this example, the command sets the database cache to 256 MB. Restart the instance: # dsctl instance_name restart 10.4. Setting the database cache size using the web console The database cache contains the Berkeley database index files for the database, meaning all of the *.db and other files used for attribute indexing by the database. This value is passed to the Berkeley DB API function set_cachesize() . This cache size has less of an impact on Directory Server performance than the entry cache size, but if there is available RAM after the entry cache size is set, increase the amount of memory allocated to the database cache. Prerequisites You are logged in to the instance in the web console. Procedure Navigate to Database Global Database Configuration . Deselect Automatic Cache Tuning . Click Save Config . Enter the database cache size in bytes, such as 268435456 for 256 MB, into the Database Cache Size field. Click Save Config . Click Actions in the top right corner, and select Restart Instance . 10.5. Setting the DN cache size using the command line Directory Server uses the entryrdn index to associate distinguished names (DN) and relative distinguished names (RDN) with entries. It enables the server to efficiently rename subtrees, move entries, and perform moddn operations. The server uses the DN cache to cache the in-memory representation of the entryrdn index to avoid expensive file I/O and transformation operations. If you do not use the auto-tuning feature, for best performance, especially with, but not limited to, renaming entries and moving operations, set the DN cache to a size that enables Directory Server to cache all DNs in the database. If a DN is not stored in the cache, Directory Server reads the DN from the entryrdn.db index database file and converts the DNs from the on-disk format to the in-memory format. DNs that are stored in the cache enable the server to skip the disk I/O and conversion steps. Procedure Display the suffixes and their corresponding back end: # dsconf -D " cn=Directory Manager " ldap://server.example.com suffix list dc=example,dc=com (userroot) This command displays the name of the back end database for each suffix. You require the suffix's database name in the next step. Set the DN cache size: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend suffix set --dncache-memsize= 20971520 userRoot This command sets the DN cache for the userRoot database to 20 megabytes. Restart the instance: # dsctl instance_name restart 10.6. Setting the DN cache size using the web console Directory Server uses the entryrdn index to associate distinguished names (DN) and relative distinguished names (RDN) with entries. It enables the server to efficiently rename subtrees, move entries, and perform moddn operations. The server uses the DN cache to cache the in-memory representation of the entryrdn index to avoid expensive file I/O and transformation operations. If you do not use the auto-tuning feature, for best performance, especially with, but not limited to, renaming entries and moving operations, set the DN cache to a size that enables Directory Server to cache all DNs in the database.
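As a quick sanity check of that sizing guidance, multiply the DN count reported by dbmon by the average DN size. With the example figures from this chapter, 100000 DNs at 130 bytes each need roughly 13 MB, so the 20 MB ( 20971520 bytes) set in the preceding procedure leaves comfortable headroom. A throwaway calculation, assuming those example numbers:
# echo '100000 * 130' | bc
13000000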
If a DN is not stored in the cache, Directory Server reads the DN from the entryrdn.db index database file and converts the DNs from the on-disk format to the in-memory format. DNs that are stored in the cache enable the server to skip the disk I/O and conversion steps. Prerequisites You are logged in to the instance in the web console. Procedure Navigate to Database Suffixes suffix_name . Enter the DN cache size in bytes into the DN Cache Size field. Click Save Configuration . Click Actions in the top right corner, and select Restart Instance . 10.7. Setting the entry cache size using the command line Directory Server uses the entry cache to store directory entries that are used during search and read operations. Setting the entry cache to a size that enables Directory Server to store all records has the highest performance impact on search operations. If entry caching is not configured, Directory Server reads the entry from the id2entry.db database file and converts the distinguished names (DN) from the on-disk format to the in-memory format. Entries that are stored in the cache enable the server to skip the disk I/O and conversion steps. Procedure Disable automatic cache tuning: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend config set --cache-autosize=0 Display the suffixes and their corresponding back end: # dsconf -D " cn=Directory Manager " ldap://server.example.com suffix list dc=example,dc=com (userroot) This command displays the name of the back end database for each suffix. You require the suffix's database name in the next step. Set the entry cache size in bytes for the database: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend suffix set --cache-memsize= 2147483648 userRoot This command sets the entry cache for the userRoot database to 2 gigabytes. Restart the instance: # dsctl instance_name restart 10.8. Setting the entry cache size using the web console Directory Server uses the entry cache to store directory entries that are used during search and read operations. Setting the entry cache to a size that enables Directory Server to store all records has the highest performance impact on search operations. If entry caching is not configured, Directory Server reads the entry from the id2entry.db database file and converts the distinguished names (DN) from the on-disk format to the in-memory format. Entries that are stored in the cache enable the server to skip the disk I/O and conversion steps. Prerequisites You are logged in to the instance in the web console. Procedure Navigate to Database Suffixes suffix_name Settings . Disable the Automatic Cache Tuning setting. Click Save Configuration . Click Actions in the top right corner, and select Restart Instance . Navigate to Database Suffixes suffix_name Settings . Set the size of the entry cache in the Entry Cache Size field. Click Save Configuration . Click Actions in the top right corner, and select Restart Instance .
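After the restart, you can confirm that the new cache sizes are in effect and that the hit ratios improve by re-running the monitoring command for the affected back end, for example with the -b option mentioned earlier:
# dsconf -D " cn=Directory Manager " ldap://server.example.com monitor dbmon -b userroot
The Entry Cache Free Percentage and DN Cache Free Percentage values should now stay well above the levels seen before the change.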
[ "dsconf -D \" cn=Directory Manager \" ldap://server.example.com monitor dbmon DB Monitor Report: 2022-02-24 10:25:16 -------------------------------------------------------- Database Cache: - Cache Hit Ratio: 50% - Free Space: 397.31 KB - Free Percentage: 2.2% - RO Page Drops: 0 - Pages In: 2934772 - Pages Out: 219075 Normalized DN Cache: - Cache Hit Ratio: 60% - Free Space: 19.98 MB - Free Percentage: 99.9% - DN Count: 100000 - Evictions: 9282348 Backends: - dc=example,dc=com (userroot): - Entry Cache Hit Ratio: 66% - Entry Cache Count: 50000 - Entry Cache Free Space: 2.0 KB - Entry Cache Free Percentage: 0.8% - Entry Cache Average Size: 8.9 KB - DN Cache Hit Ratio: 21% - DN Cache Count: 100000 - DN Cache Free Space: 4.29 MB - DN Cache Free Percentage: 69.8% - DN Cache Average Size: 130.0 B", "Database Cache: - Free Space: 397.31 KB - Free Percentage: 2.2%", "Backends: - dc=example,dc=com (userroot): - DN Cache Count: 100000 - DN Cache Free Space: 4.29 MB - DN Cache Free Percentage: 69.8% - DN Cache Average Size: 130.0 B", "Backends: - dc=example,dc=com (userroot): - Entry Cache Count: 50000 - Entry Cache Free Space: 2.0 KB - Entry Cache Free Percentage: 0.8% - Entry Cache Average Size: 8.9 KB", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend config set --cache-autosize=0", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend config set --dbcachesize= 268435456", "dsctl instance_name restart", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com suffix list dc=example,dc=com (userroot)", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend suffix set --dncache-memsize= 20971520 userRoot", "dsctl instance_name restart", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend config set --cache-autosize=0", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com suffix list dc=example,dc=com (userroot)", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend suffix set --cache-memsize= 2147483648 userRoot", "dsctl instance_name restart" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/tuning_the_performance_of_red_hat_directory_server/assembly_managing-cache-settings_assembly_improving-the-performance-of-views
Chapter 8. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation
Chapter 8. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as worker, as infra, or with both roles. See Section 8.3, "Manual creation of infrastructure nodes" for more information. 8.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non-OpenShift Data Foundation resources from being scheduled on the tainted nodes. Note Adding the storage taint on nodes might require toleration handling for other daemonset pods, such as the openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: https://access.redhat.com/solutions/6592171 . Example of the taint and labels required on an infrastructure node that will be used to run OpenShift Data Foundation services: 8.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. These nodes will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 8.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources.
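Before creating or labeling any nodes, it can help to confirm that at least three suitable worker nodes exist and to review their allocatable CPU and memory. The following is a read-only check that makes no changes to the cluster:
oc get nodes -l node-role.kubernetes.io/worker -o wide
oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.cpu}{"\t"}{.status.allocatable.memory}{"\n"}{end}'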
To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role.kubernetes.io/worker="" node role. The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding the node-role.kubernetes.io/infra="" node role and the OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 8.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute Nodes , and then select the node that has to be tainted. In the Details page click on Edit taints . Enter the values in the Key <node.ocs.openshift.io/storage>, Value <true> and in the Effect <NoSchedule> field. Click Save. Verification steps Follow these steps to verify that the node has been tainted successfully: Navigate to Compute Nodes . Select the node to verify its status, and then click on the YAML tab. In the spec section, check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere .
[ "spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"", "adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule", "Taints: Key: node.ocs.openshift.io/storage Value: true Effect: Noschedule" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_rhodf
Chapter 2. Understanding disconnected installation mirroring
Chapter 2. Understanding disconnected installation mirroring You can use a mirror registry for disconnected installations and to ensure that your clusters only use container images that satisfy your organization's controls on external content. Before you install a cluster on infrastructure that you provision in a disconnected environment, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. 2.1. Mirroring images for a disconnected installation through the Agent-based Installer You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry: Mirroring images for a disconnected installation Mirroring images for a disconnected installation using the oc-mirror plugin 2.2. About mirroring the OpenShift Container Platform image repository for a disconnected registry To use mirror images for a disconnected installation with the Agent-based Installer, you must modify the install-config.yaml file. You can mirror the release image by using the output of either the oc adm release mirror or oc mirror command. This is dependent on which command you used to set up the mirror registry. The following example shows the output of the oc adm release mirror command. USD oc adm release mirror Example output To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release The following example shows part of the imageContentSourcePolicy.yaml file generated by the oc-mirror plugin. The file can be found in the results directory, for example oc-mirror-workspace/results-1682697932/ . Example imageContentSourcePolicy.yaml file spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release 2.2.1. Configuring the Agent-based Installer to use mirrored images You must use the output of either the oc adm release mirror command or the oc-mirror plugin to configure the Agent-based Installer to use mirrored images. Procedure If you used the oc-mirror plugin to mirror your release images: Open the imageContentSourcePolicy.yaml located in the results directory, for example oc-mirror-workspace/results-1682697932/ . Copy the text in the repositoryDigestMirrors section of the yaml file. If you used the oc adm release mirror command to mirror your release images: Copy the text in the imageContentSources section of the command output. Paste the copied text into the imageContentSources field of the install-config.yaml file. Add the certificate file used for the mirror registry to the additionalTrustBundle field of the yaml file. Important The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. 
Example install-config.yaml file additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- If you are using GitOps ZTP manifests: add the registries.conf and ca-bundle.crt files to the mirror path to add the mirror configuration in the agent ISO image. Note You can create the registries.conf file from the output of either the oc adm release mirror command or the oc mirror plugin. The format of the /etc/containers/registries.conf file has changed. It is now version 2 and in TOML format. Example registries.conf file [[registry]] location = "registry.ci.openshift.org/ocp/release" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image" [[registry]] location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image"
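If you want to avoid hand-editing the certificate block, one way to prepare the additionalTrustBundle value is to indent the mirror registry CA file before pasting it into install-config.yaml , and to check the certificate first. A small sketch; the path to the CA file is a placeholder:
sed 's/^/  /' /path/to/ca-bundle.crt
openssl x509 -in /path/to/ca-bundle.crt -noout -subject -enddate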
[ "oc adm release mirror", "To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release", "spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "[[registry]] location = \"registry.ci.openshift.org/ocp/release\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\" [[registry]] location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_an_on-premise_cluster_with_the_agent-based_installer/understanding-disconnected-installation-mirroring
3.2. Types
3.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. Label files with the samba_share_t type to allow Samba to share them. Only label files you have created, and do not relabel system files with the samba_share_t type: Booleans can be enabled to share such files and directories. SELinux allows Samba to write to files labeled with the samba_share_t type, as long as /etc/samba/smb.conf and Linux permissions are set accordingly. The samba_etc_t type is used on certain files in /etc/samba/ , such as smb.conf . Do not manually label files with the samba_etc_t type. If files in /etc/samba/ are not labeled correctly, run the restorecon -R -v /etc/samba command as the root user to restore such files to their default contexts. If /etc/samba/smb.conf is not labeled with the samba_etc_t type, the service smb start command may fail and an SELinux denial may be logged. The following is an example denial when /etc/samba/smb.conf was labeled with the httpd_sys_content_t type:
[ "setroubleshoot: SELinux is preventing smbd (smbd_t) \"read\" to ./smb.conf (httpd_sys_content_t). For complete SELinux messages. run sealert -l deb33473-1069-482b-bb50-e4cd05ab18af" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-samba-types
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.10/proc-providing-feedback-on-redhat-documentation
Chapter 6. Connecting applications to services
Chapter 6. Connecting applications to services 6.1. Release notes for Service Binding Operator The Service Binding Operator consists of a controller and an accompanying custom resource definition (CRD) for service binding. It manages the data plane for workloads and backing services. The Service Binding Controller reads the data made available by the control plane of backing services. Then, it projects this data to workloads according to the rules specified through the ServiceBinding resource. With Service Binding Operator, you can: Bind your workloads together with Operator-managed backing services. Automate configuration of binding data. Provide service operators a low-touch administrative experience to provision and manage access to services. Enrich development lifecycle with a consistent and declarative service binding method that eliminates discrepancies in cluster environments. The custom resource definition (CRD) of the Service Binding Operator supports the following APIs: Service Binding with the binding.operators.coreos.com API group. Service Binding (Spec API) with the servicebinding.io API group. 6.1.1. Support matrix Some features in the following table are in Technology Preview . These experimental features are not intended for production use. In the table, features are marked with the following statuses: TP : Technology Preview GA : General Availability Note the following scope of support on the Red Hat Customer Portal for these features: Table 6.1. Support matrix Service Binding Operator API Group and Support Status OpenShift Versions Version binding.operators.coreos.com servicebinding.io 1.3.3 GA GA 4.9-4.12 1.3.1 GA GA 4.9-4.11 1.3 GA GA 4.9-4.11 1.2 GA GA 4.7-4.11 1.1.1 GA TP 4.7-4.10 1.1 GA TP 4.7-4.10 1.0.1 GA TP 4.7-4.9 1.0 GA TP 4.7-4.9 6.1.2. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see Red Hat CTO Chris Wright's message . 6.1.3. Release notes for Service Binding Operator 1.3.3 Service Binding Operator 1.3.3 is now available on OpenShift Container Platform 4.9, 4.10, 4.11 and 4.12. 6.1.3.1. Fixed issues Before this update, a security vulnerability CVE-2022-41717 was noted for Service Binding Operator. This update fixes the CVE-2022-41717 error and updates the golang.org/x/net package from v0.0.0-20220906165146-f3363e06e74c to v0.4.0. APPSVC-1256 Before this update, Provisioned Services were only detected if the respective resource had the "servicebinding.io/provisioned-service: true" annotation set while other Provisioned Services were missed. With this update, the detection mechanism identifies all Provisioned Services correctly based on the "status.binding.name" attribute. APPSVC-1204 6.1.4. Release notes for Service Binding Operator 1.3.1 Service Binding Operator 1.3.1 is now available on OpenShift Container Platform 4.9, 4.10, and 4.11. 6.1.4.1. Fixed issues Before this update, a security vulnerability CVE-2022-32149 was noted for Service Binding Operator. This update fixes the CVE-2022-32149 error and updates the golang.org/x/text package from v0.3.7 to v0.3.8. APPSVC-1220 6.1.5. Release notes for Service Binding Operator 1.3 Service Binding Operator 1.3 is now available on OpenShift Container Platform 4.9, 4.10, and 4.11. 6.1.5.1. 
Removed functionality In Service Binding Operator 1.3, the Operator Lifecycle Manager (OLM) descriptor feature has been removed to improve resource utilization. As an alternative to OLM descriptors, you can use CRD annotations to declare binding data. 6.1.6. Release notes for Service Binding Operator 1.2 Service Binding Operator 1.2 is now available on OpenShift Container Platform 4.7, 4.8, 4.9, 4.10, and 4.11. 6.1.6.1. New features This section highlights what is new in Service Binding Operator 1.2: Enable Service Binding Operator to consider optional fields in the annotations by setting the optional flag value to true . Support for servicebinding.io/v1beta1 resources. Improvements to the discoverability of bindable services by exposing the relevant binding secret without requiring a workload to be present. 6.1.6.2. Known issues Currently, when you install Service Binding Operator on OpenShift Container Platform 4.11, the memory footprint of Service Binding Operator increases beyond expected limits. With low usage, however, the memory footprint stays within the expected ranges of your environment or scenarios. In comparison with OpenShift Container Platform 4.10, under stress, both the average and maximum memory footprint increase considerably. This issue is also evident in previous versions of Service Binding Operator. There is currently no workaround for this issue. APPSVC-1200 By default, the projected files get their permissions set to 0644. Service Binding Operator cannot set specific permissions due to a bug in Kubernetes that causes issues if the service expects specific permissions such as 0600 . As a workaround, you can modify the code of the program or the application that is running inside a workload resource to copy the file to the /tmp directory and set the appropriate permissions. APPSVC-1127 There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example: Example error message `postgresclusters.postgres-operator.crunchydata.com "hippo" is forbidden: User "system:serviceaccount:my-petclinic:service-binding-operator" cannot get resource "postgresclusters" in API group "postgres-operator.crunchydata.com" in the namespace "my-petclinic"` Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds.
Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed: Example: Role binding for Crunchy Postgres Operator kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role APPSVC-1062 According to the specification, when you change the ClusterWorkloadResourceMapping resources, Service Binding Operator must use the previous version of the ClusterWorkloadResourceMapping resource to remove the binding data that was being projected until now. Currently, when you change the ClusterWorkloadResourceMapping resources, the Service Binding Operator uses the latest version of the ClusterWorkloadResourceMapping resource to remove the binding data. As a result, Service Binding Operator might remove the binding data incorrectly. As a workaround, perform the following steps: Delete any ServiceBinding resources that use the corresponding ClusterWorkloadResourceMapping resource. Modify the ClusterWorkloadResourceMapping resource. Re-apply the ServiceBinding resources that you previously removed in step 1. APPSVC-1102 6.1.7. Release notes for Service Binding Operator 1.1.1 Service Binding Operator 1.1.1 is now available on OpenShift Container Platform 4.7, 4.8, 4.9, and 4.10. 6.1.7.1. Fixed issues Before this update, a security vulnerability CVE-2021-38561 was noted for Service Binding Operator Helm chart. This update fixes the CVE-2021-38561 error and updates the golang.org/x/text package from v0.3.6 to v0.3.7. APPSVC-1124 Before this update, users of the Developer Sandbox did not have sufficient permissions to read ClusterWorkloadResourceMapping resources. As a result, Service Binding Operator prevented all service bindings from being successful. With this update, the Service Binding Operator now includes the appropriate role-based access control (RBAC) rules for any authenticated subject including the Developer Sandbox users. These RBAC rules allow the Service Binding Operator to get , list , and watch the ClusterWorkloadResourceMapping resources for the Developer Sandbox users and to process service bindings successfully. APPSVC-1135 6.1.7.2. Known issues There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example: Example error message `postgresclusters.postgres-operator.crunchydata.com "hippo" is forbidden: User "system:serviceaccount:my-petclinic:service-binding-operator" cannot get resource "postgresclusters" in API group "postgres-operator.crunchydata.com" in the namespace "my-petclinic"` Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds.
Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed: Example: Role binding for Crunchy Postgres Operator kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role APPSVC-1062 Currently, when you modify the ClusterWorkloadResourceMapping resources, the Service Binding Operator does not implement correct behavior. As a workaround, perform the following steps: Delete any ServiceBinding resources that use the corresponding ClusterWorkloadResourceMapping resource. Modify the ClusterWorkloadResourceMapping resource. Re-apply the ServiceBinding resources that you previously removed in step 1. APPSVC-1102 6.1.8. Release notes for Service Binding Operator 1.1 Service Binding Operator is now available on OpenShift Container Platform 4.7, 4.8, 4.9, and 4.10. 6.1.8.1. New features This section highlights what is new in Service Binding Operator 1.1: Service Binding Options Workload resource mapping: Define exactly where binding data needs to be projected for the secondary workloads. Bind new workloads using a label selector. 6.1.8.2. Fixed issues Before this update, service bindings that used label selectors to pick up workloads did not project service binding data into the new workloads that matched the given label selectors. As a result, the Service Binding Operator could not periodically bind such new workloads. With this update, service bindings now project service binding data into the new workloads that match the given label selector. The Service Binding Operator now periodically attempts to find and bind such new workloads. APPSVC-1083 6.1.8.3. Known issues There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example: Example error message `postgresclusters.postgres-operator.crunchydata.com "hippo" is forbidden: User "system:serviceaccount:my-petclinic:service-binding-operator" cannot get resource "postgresclusters" in API group "postgres-operator.crunchydata.com" in the namespace "my-petclinic"` Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds. 
Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed: Example: Role binding for Crunchy Postgres Operator kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role APPSVC-1062 Currently, when you modify the ClusterWorkloadResourceMapping resources, the Service Binding Operator does not implement correct behavior. As a workaround, perform the following steps: Delete any ServiceBinding resources that use the corresponding ClusterWorkloadResourceMapping resource. Modify the ClusterWorkloadResourceMapping resource. Re-apply the ServiceBinding resources that you previously removed in step 1. APPSVC-1102 6.1.9. Release notes for Service Binding Operator 1.0.1 Service Binding Operator is now available on OpenShift Container Platform 4.7, 4.8 and 4.9. Service Binding Operator 1.0.1 supports OpenShift Container Platform 4.9 and later running on: IBM Power Systems IBM Z and LinuxONE The custom resource definition (CRD) of the Service Binding Operator 1.0.1 supports the following APIs: Service Binding with the binding.operators.coreos.com API group. Service Binding (Spec API Tech Preview) with the servicebinding.io API group. Important Service Binding (Spec API Tech Preview) with the servicebinding.io API group is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.1.9.1. Support matrix Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Technology Preview Features Support Scope In the table below, features are marked with the following statuses: TP : Technology Preview GA : General Availability Note the following scope of support on the Red Hat Customer Portal for these features: Table 6.2. Support matrix Feature Service Binding Operator 1.0.1 binding.operators.coreos.com API group GA servicebinding.io API group TP 6.1.9.2. Fixed issues Before this update, binding the data values from a Cluster custom resource (CR) of the postgresql.k8s.enterpriesedb.io/v1 API collected the host binding value from the .metadata.name field of the CR. The collected binding value is an incorrect hostname and the correct hostname is available at the .status.writeService field. With this update, the annotations that the Service Binding Operator uses to expose the binding data values from the backing service CR are now modified to collect the host binding value from the .status.writeService field. The Service Binding Operator uses these modified annotations to project the correct hostname in the host and provider bindings. 
APPSVC-1040 Before this update, when you would bind a PostgresCluster CR of the postgres-operator.crunchydata.com/v1beta1 API, the binding data values did not include the values for the database certificates. As a result, the application failed to connect to the database. With this update, modifications to the annotations that the Service Binding Operator uses to expose the binding data from the backing service CR now include the database certificates. The Service Binding Operator uses these modified annotations to project the correct ca.crt , tls.crt , and tls.key certificate files. APPSVC-1045 Before this update, when you would bind a PerconaXtraDBCluster custom resource (CR) of the pxc.percona.com API, the binding data values did not include the port and database values. These binding values along with the others already projected are necessary for an application to successfully connect to the database service. With this update, the annotations that the Service Binding Operator uses to expose the binding data values from the backing service CR are now modified to project the additional port and database binding values. The Service Binding Operator uses these modified annotations to project the complete set of binding values that the application can use to successfully connect to the database service. APPSVC-1073 6.1.9.3. Known issues Currently, when you install the Service Binding Operator in the single namespace installation mode, the absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. In addition, the following error message is generated: Example error message Workaround 1: Install the Service Binding Operator in the all namespaces installation mode. As a result, the appropriate cluster-scoped RBAC rule now exists and the binding succeeds. Workaround 2: If you cannot install the Service Binding Operator in the all namespaces installation mode, install the following role binding into the namespace where the Service Binding Operator is installed: Example: Role binding for Crunchy Postgres Operator kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role APPSVC-1062 6.1.10. Release notes for Service Binding Operator 1.0 Service Binding Operator is now available on OpenShift Container Platform 4.7, 4.8 and 4.9. The custom resource definition (CRD) of the Service Binding Operator 1.0 supports the following APIs: Service Binding with the binding.operators.coreos.com API group. Service Binding (Spec API Tech Preview) with the servicebinding.io API group. Important Service Binding (Spec API Tech Preview) with the servicebinding.io API group is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.1.10.1. 
Support matrix Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Technology Preview Features Support Scope In the table below, features are marked with the following statuses: TP : Technology Preview GA : General Availability Note the following scope of support on the Red Hat Customer Portal for these features: Table 6.3. Support matrix Feature Service Binding Operator 1.0 binding.operators.coreos.com API group GA servicebinding.io API group TP 6.1.10.2. New features Service Binding Operator 1.0 supports OpenShift Container Platform 4.9 and later running on: IBM Power Systems IBM Z and LinuxONE This section highlights what is new in Service Binding Operator 1.0: Exposal of binding data from services Based on annotations present in CRD, custom resources (CRs), or resources. Based on descriptors present in Operator Lifecycle Manager (OLM) descriptors. Support for provisioned services Workload projection Projection of binding data as files, with volume mounts. Projection of binding data as environment variables. Service Binding Options Bind backing services in a namespace that is different from the workload namespace. Project binding data into the specific container workloads. Auto-detection of the binding data from resources owned by the backing service CR. Compose custom binding data from the exposed binding data. Support for non- PodSpec compliant workload resources. Security Support for role-based access control (RBAC). 6.1.11. Additional resources Understanding Service Binding Operator . 6.2. Understanding Service Binding Operator Application developers need access to backing services to build and connect workloads. Connecting workloads to backing services is always a challenge because each service provider suggests a different way to access their secrets and consume them in a workload. In addition, manual configuration and maintenance of this binding together of workloads and backing services make the process tedious, inefficient, and error-prone. The Service Binding Operator enables application developers to easily bind workloads together with Operator-managed backing services, without any manual procedures to configure the binding connection. 6.2.1. Service Binding terminology This section summarizes the basic terms used in Service Binding. Service binding The representation of the action of providing information about a service to a workload. Examples include establishing the exchange of credentials between a Java application and a database that it requires. Backing service Any service or software that the application consumes over the network as part of its normal operation. Examples include a database, a message broker, an application with REST endpoints, an event stream, an Application Performance Monitor (APM), or a Hardware Security Module (HSM). Workload (application) Any process running within a container. Examples include a Spring Boot application, a NodeJS Express application, or a Ruby on Rails application. Binding data Information about a service that you use to configure the behavior of other resources within the cluster. Examples include credentials, connection details, volume mounts, or secrets. Binding connection Any connection that establishes an interaction between the connected components, such as a bindable backing service and an application requiring that backing service. 6.2.2. 
About Service Binding Operator The Service Binding Operator consists of a controller and an accompanying custom resource definition (CRD) for service binding. It manages the data plane for workloads and backing services. The Service Binding Controller reads the data made available by the control plane of backing services. Then, it projects this data to workloads according to the rules specified through the ServiceBinding resource. As a result, the Service Binding Operator enables workloads to use backing services or external services by automatically collecting and sharing binding data with the workloads. The process involves making the backing service bindable and binding the workload and the service together. 6.2.2.1. Making an Operator-managed backing service bindable To make a service bindable, as an Operator provider, you need to expose the binding data required by workloads to bind with the services provided by the Operator. You can provide the binding data either as annotations or as descriptors in the CRD of the Operator that manages the backing service. 6.2.2.2. Binding a workload together with a backing service By using the Service Binding Operator, as an application developer, you need to declare the intent of establishing a binding connection. You must create a ServiceBinding CR that references the backing service. This action triggers the Service Binding Operator to project the exposed binding data into the workload. The Service Binding Operator receives the declared intent and binds the workload together with the backing service. The CRD of the Service Binding Operator supports the following APIs: Service Binding with the binding.operators.coreos.com API group. Service Binding (Spec API) with the servicebinding.io API group. With Service Binding Operator, you can: Bind your workloads to Operator-managed backing services. Automate configuration of binding data. Provide service operators with a low-touch administrative experience to provision and manage access to services. Enrich the development lifecycle with a consistent and declarative service binding method that eliminates discrepancies in cluster environments. 6.2.3. Key features Exposal of binding data from services Based on annotations present in CRD, custom resources (CRs), or resources. Workload projection Projection of binding data as files, with volume mounts. Projection of binding data as environment variables. Service Binding Options Bind backing services in a namespace that is different from the workload namespace. Project binding data into the specific container workloads. Auto-detection of the binding data from resources owned by the backing service CR. Compose custom binding data from the exposed binding data. Support for non- PodSpec compliant workload resources. Security Support for role-based access control (RBAC). 6.2.4. API differences The CRD of the Service Binding Operator supports the following APIs: Service Binding with the binding.operators.coreos.com API group. Service Binding (Spec API) with the servicebinding.io API group. Both of these API groups have similar features, but they are not completely identical. 
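To make the comparison concrete, the same binding intent can be written against either group. The following is a minimal sketch of the servicebinding.io form, reusing the hippo database and spring-petclinic workload from the getting-started example later in this chapter, where the equivalent binding.operators.coreos.com form is shown in full:
apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: spring-petclinic-pgcluster
spec:
  service:
    apiVersion: postgres-operator.crunchydata.com/v1beta1
    kind: PostgresCluster
    name: hippo
  workload:
    apiVersion: apps/v1
    kind: Deployment
    name: spring-petclinic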
Here is the complete list of differences between these API groups: Feature Supported by the binding.operators.coreos.com API group Supported by the servicebinding.io API group Notes Binding to provisioned services Yes Yes Not applicable (N/A) Direct secret projection Yes Yes Not applicable (N/A) Bind as files Yes Yes Default behavior for the service bindings of the servicebinding.io API group Opt-in functionality for the service bindings of the binding.operators.coreos.com API group Bind as environment variables Yes Yes Default behavior for the service bindings of the binding.operators.coreos.com API group. Opt-in functionality for the service bindings of the servicebinding.io API group: Environment variables are created alongside files. Selecting workload with a label selector Yes Yes Not applicable (N/A) Detecting binding resources ( .spec.detectBindingResources ) Yes No The servicebinding.io API group has no equivalent feature. Naming strategies Yes No There is no current mechanism within the servicebinding.io API group to interpret the templates that naming strategies use. Container path Yes Partial Because a service binding of the binding.operators.coreos.com API group can specify mapping behavior within the ServiceBinding resource, the servicebinding.io API group cannot fully support an equivalent behavior without more information about the workload. Container name filtering No Yes The binding.operators.coreos.com API group has no equivalent feature. Secret path Yes No The servicebinding.io API group has no equivalent feature. Alternative binding sources (for example, binding data from annotations) Yes Allowed by Service Binding Operator The specification requires support for getting binding data from provisioned services and secrets. However, a strict reading of the specification suggests that support for other binding data sources is allowed. Using this fact, Service Binding Operator can pull the binding data from various sources (for example, pulling binding data from annotations). Service Binding Operator supports these sources on both the API groups. 6.2.5. Additional resources Getting started with service binding 6.3. Installing Service Binding Operator This guide walks cluster administrators through the process of installing the Service Binding Operator to an OpenShift Container Platform cluster. You can install Service Binding Operator on OpenShift Container Platform 4.7 and later. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually. 6.3.1. Installing the Service Binding Operator using the web console You can install Service Binding Operator using the OpenShift Container Platform OperatorHub. When you install the Service Binding Operator, the custom resources (CRs) required for the service binding configuration are automatically installed along with the Operator. Procedure In the Administrator perspective of the web console, navigate to Operators OperatorHub . Use the Filter by keyword box to search for Service Binding Operator in the catalog. Click the Service Binding Operator tile. Read the brief description about the Operator on the Service Binding Operator page. Click Install . On the Install Operator page: Select All namespaces on the cluster (default) for the Installation Mode . 
This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be made available to all namespaces in the cluster. Select Automatic for the Approval Strategy . This ensures that future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager (OLM). If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version. Select an Update Channel . By default, the stable channel enables installation of the latest stable and supported release of the Service Binding Operator. Click Install . Note The Operator is installed automatically into the openshift-operators namespace. On the Installed Operator - ready for use pane, click View Operator . You will see the Operator listed on the Installed Operators page. Verify that the Status is set to Succeeded to confirm successful installation of Service Binding Operator. 6.3.2. Additional resources Getting started with service binding . 6.4. Getting started with service binding The Service Binding Operator manages the data plane for workloads and backing services. This guide provides instructions with examples to help you create a database instance, deploy an application, and use the Service Binding Operator to create a binding connection between the application and the database service. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc CLI. You have installed Service Binding Operator from OperatorHub. You have installed the 5.1.2 version of the Crunchy Postgres for Kubernetes Operator from OperatorHub using the v5 Update channel. The installed Operator is available in an appropriate namespace, such as the my-petclinic namespace. Note You can create the namespace using the oc create namespace my-petclinic command or, equivalently, create it as a project using the oc new-project my-petclinic command. 6.4.1. Creating a PostgreSQL database instance To create a PostgreSQL database instance, you must create a PostgresCluster custom resource (CR) and configure the database. Procedure Create the PostgresCluster CR in the my-petclinic namespace by running the following command in shell: USD oc apply -n my-petclinic -f - << EOD --- apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo spec: image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.4-0 postgresVersion: 14 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - "ReadWriteOnce" resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - "ReadWriteOnce" resources: requests: storage: 1Gi EOD The annotations added in this PostgresCluster CR enable the service binding connection and trigger the Operator reconciliation.
The output verifies that the database instance is created: Example output postgrescluster.postgres-operator.crunchydata.com/hippo created After you have created the database instance, ensure that all the pods in the my-petclinic namespace are running: USD oc get pods -n my-petclinic The output, which takes a few minutes to display, verifies that the database is created and configured: Example output NAME READY STATUS RESTARTS AGE hippo-backup-9rxm-88rzq 0/1 Completed 0 2m2s hippo-instance1-6psd-0 4/4 Running 0 3m28s hippo-repo-host-0 2/2 Running 0 3m28s After the database is configured, you can deploy the sample application and connect it to the database service. 6.4.2. Deploying the Spring PetClinic sample application To deploy the Spring PetClinic sample application on an OpenShift Container Platform cluster, you must use a deployment configuration and configure your local environment to be able to test the application. Procedure Deploy the spring-petclinic application with the PostgresCluster custom resource (CR) by running the following command in shell: USD oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD The output verifies that the Spring PetClinic sample application is created and deployed: Example output deployment.apps/spring-petclinic created service/spring-petclinic created Note If you are deploying the application using Container images in the Developer perspective of the web console, you must enter the following environment variables under the Deployment section of the Advanced options : Name: SPRING_PROFILES_ACTIVE Value: postgres Verify that the application is not yet connected to the database service by running the following command: USD oc get pods -n my-petclinic The output takes a few minutes to display the CrashLoopBackOff status: Example output NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s At this stage, the pod fails to start. If you try to interact with the application, it returns errors. Expose the service to create a route for your application: USD oc expose service spring-petclinic -n my-petclinic The output verifies that the spring-petclinic service is exposed and a route for the Spring PetClinic sample application is created: Example output route.route.openshift.io/spring-petclinic exposed You can now use the Service Binding Operator to connect the application to the database service. 6.4.3. Connecting the Spring PetClinic sample application to the PostgreSQL database service To connect the sample application to the database service, you must create a ServiceBinding custom resource (CR) that triggers the Service Binding Operator to project the binding data into the application. 
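Optionally, before you create the ServiceBinding CR, you can confirm which Secret resource holds the database credentials that the Operator projects. The following commands assume the hippo-pguser-hippo secret name, which follows the <cluster>-pguser-<user> convention that the Crunchy Operator uses and that is referenced later in this guide:

# List the credentials secret that the Crunchy Operator creates for the hippo cluster
oc get secret hippo-pguser-hippo -n my-petclinic

# Optionally list the keys it exposes (the values are base64 encoded)
oc get secret hippo-pguser-hippo -n my-petclinic -o jsonpath='{.data}'

The keys in this secret correspond to the values that are later projected under the bindings/spring-petclinic-pgcluster directory.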
Procedure Create a ServiceBinding CR to project the binding data: USD oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster 2 name: hippo application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD 1 Specifies a list of service resources. 2 The CR of the database. 3 The sample application that points to a Deployment or any other similar resource with an embedded PodSpec. The output verifies that the ServiceBinding CR is created to project the binding data into the sample application. Example output servicebinding.binding.operators.coreos.com/spring-petclinic created Verify that the request for service binding is successful: USD oc get servicebindings -n my-petclinic Example output NAME READY REASON AGE spring-petclinic-pgcluster True ApplicationsBound 7s By default, the values from the binding data of the database service are projected as files into the workload container that runs the sample application. For example, all the values from the Secret resource are projected into the bindings/spring-petclinic-pgcluster directory. Note Optionally, you can also verify that the files in the application contain the projected binding data, by printing out the directory contents: USD for i in username password host port type; do oc exec -it deploy/spring-petclinic -n my-petclinic -- /bin/bash -c 'cd /tmp; find /bindings/*/'USDi' -exec echo -n {}:" " \; -exec cat {} \;'; echo; done Example output: With all the values from the secret resource /bindings/spring-petclinic-pgcluster/username: <username> /bindings/spring-petclinic-pgcluster/password: <password> /bindings/spring-petclinic-pgcluster/host: hippo-primary.my-petclinic.svc /bindings/spring-petclinic-pgcluster/port: 5432 /bindings/spring-petclinic-pgcluster/type: postgresql Set up the port forwarding from the application port to access the sample application from your local environment: USD oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic Example output Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080 Access http://localhost:8080/petclinic . You can now remotely access the Spring PetClinic sample application at localhost:8080 and see that the application is now connected to the database service. 6.4.4. Additional resources Installing Service Binding Operator . Creating applications using the Developer perspective . Managing resources from custom resource definitions . Known bindable Operators . 6.5. Getting started with service binding on IBM Power, IBM Z, and IBM LinuxONE The Service Binding Operator manages the data plane for workloads and backing services. This guide provides instructions with examples to help you create a database instance, deploy an application, and use the Service Binding Operator to create a binding connection between the application and the database service. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc CLI. You have installed the Service Binding Operator from OperatorHub. 6.5.1. 
Deploying a PostgreSQL Operator Procedure To deploy the Dev4Devs PostgreSQL Operator in the my-petclinic namespace run the following command in shell: USD oc apply -f - << EOD --- apiVersion: v1 kind: Namespace metadata: name: my-petclinic --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: postgres-operator-group namespace: my-petclinic --- apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: ibm-multiarch-catalog namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/ibm/operator-registry-<architecture> 1 imagePullPolicy: IfNotPresent displayName: ibm-multiarch-catalog updateStrategy: registryPoll: interval: 30m --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: postgresql-operator-dev4devs-com namespace: openshift-operators spec: channel: alpha installPlanApproval: Automatic name: postgresql-operator-dev4devs-com source: ibm-multiarch-catalog sourceNamespace: openshift-marketplace --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: database-view labels: servicebinding.io/controller: "true" rules: - apiGroups: - postgresql.dev4devs.com resources: - databases verbs: - get - list EOD 1 The Operator image. For IBM Power(R): quay.io/ibm/operator-registry-ppc64le:release-4.9 For IBM Z(R) and IBM(R) LinuxONE: quay.io/ibm/operator-registry-s390x:release-4.8 Verification After the operator is installed, list the operator subscriptions in the openshift-operators namespace: USD oc get subs -n openshift-operators Example output NAME PACKAGE SOURCE CHANNEL postgresql-operator-dev4devs-com postgresql-operator-dev4devs-com ibm-multiarch-catalog alpha rh-service-binding-operator rh-service-binding-operator redhat-operators stable 6.5.2. Creating a PostgreSQL database instance To create a PostgreSQL database instance, you must create a Database custom resource (CR) and configure the database. Procedure Create the Database CR in the my-petclinic namespace by running the following command in shell: USD oc apply -f - << EOD apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: sampledatabase namespace: my-petclinic annotations: host: sampledatabase type: postgresql port: "5432" service.binding/database: 'path={.spec.databaseName}' service.binding/port: 'path={.metadata.annotations.port}' service.binding/password: 'path={.spec.databasePassword}' service.binding/username: 'path={.spec.databaseUser}' service.binding/type: 'path={.metadata.annotations.type}' service.binding/host: 'path={.metadata.annotations.host}' spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: "sampledb" databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: "samplepwd" databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: "sampleuser" databaseUserKeyEnvVar: POSTGRESQL_USER image: registry.redhat.io/rhel8/postgresql-13:latest databaseStorageClassName: nfs-storage-provisioner size: 1 EOD The annotations added in this Database CR enable the service binding connection and trigger the Operator reconciliation. 
The output verifies that the database instance is created: Example output database.postgresql.dev4devs.com/sampledatabase created After you have created the database instance, ensure that all the pods in the my-petclinic namespace are running: USD oc get pods -n my-petclinic The output, which takes a few minutes to display, verifies that the database is created and configured: Example output NAME READY STATUS RESTARTS AGE sampledatabase-cbc655488-74kss 0/1 Running 0 32s After the database is configured, you can deploy the sample application and connect it to the database service. 6.5.3. Deploying the Spring PetClinic sample application To deploy the Spring PetClinic sample application on an OpenShift Container Platform cluster, you must use a deployment configuration and configure your local environment to be able to test the application. Procedure Deploy the spring-petclinic application by running the following command in shell: USD oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres - name: org.springframework.cloud.bindings.boot.enable value: "true" ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD The output verifies that the Spring PetClinic sample application is created and deployed: Example output deployment.apps/spring-petclinic created service/spring-petclinic created Note If you are deploying the application using Container images in the Developer perspective of the web console, you must enter the following environment variables under the Deployment section of the Advanced options : Name: SPRING_PROFILES_ACTIVE Value: postgres Verify that the application is not yet connected to the database service by running the following command: USD oc get pods -n my-petclinic It takes a few minutes until the CrashLoopBackOff status is displayed: Example output NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s At this stage, the pod fails to start. If you try to interact with the application, it returns errors. You can now use the Service Binding Operator to connect the application to the database service. 6.5.4. Connecting the Spring PetClinic sample application to the PostgreSQL database service To connect the sample application to the database service, you must create a ServiceBinding custom resource (CR) that triggers the Service Binding Operator to project the binding data into the application. Procedure Create a ServiceBinding CR to project the binding data: USD oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgresql.dev4devs.com kind: Database 2 name: sampledatabase version: v1alpha1 application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD 1 Specifies a list of service resources. 2 The CR of the database.
3 The sample application that points to a Deployment or any other similar resource with an embedded PodSpec. The output verifies that the ServiceBinding CR is created to project the binding data into the sample application. Example output servicebinding.binding.operators.coreos.com/spring-petclinic created Verify that the request for service binding is successful: USD oc get servicebindings -n my-petclinic Example output NAME READY REASON AGE spring-petclinic-postgresql True ApplicationsBound 47m By default, the values from the binding data of the database service are projected as files into the workload container that runs the sample application. For example, all the values from the Secret resource are projected into the bindings/spring-petclinic-pgcluster directory. Once this is created, you can go to the topology to see the visual connection. Figure 6.1. Connecting spring-petclinic to a sample database Set up the port forwarding from the application port to access the sample application from your local environment: USD oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic Example output Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080 Access http://localhost:8080 . You can now remotely access the Spring PetClinic sample application at localhost:8080 and see that the application is now connected to the database service. 6.5.5. Additional resources Installing Service Binding Operator Creating applications using the Developer perspective Managing resources from custom resource definitions 6.6. Exposing binding data from a service Application developers need access to backing services to build and connect workloads. Connecting workloads to backing services is always a challenge because each service provider requires a different way to access their secrets and consume them in a workload. The Service Binding Operator enables application developers to easily bind workloads together with operator-managed backing services, without any manual procedures to configure the binding connection. For the Service Binding Operator to provide the binding data, as an Operator provider or user who creates backing services, you must expose the binding data to be automatically detected by the Service Binding Operator. Then, the Service Binding Operator automatically collects the binding data from the backing service and shares it with a workload to provide a consistent and predictable experience. 6.6.1. Methods of exposing binding data This section describes the methods you can use to expose the binding data. Ensure that you know and understand your workload requirements and environment, and how it works with the provided services. Binding data is exposed under the following circumstances: Backing service is available as a provisioned service resource. The service you intend to connect to is compliant with the Service Binding specification. You must create a Secret resource with all the required binding data values and reference it in the backing service custom resource (CR). The detection of all the binding data values is automatic. Backing service is not available as a provisioned service resource. You must expose the binding data from the backing service. Depending on your workload requirements and environment, you can choose any of the following methods to expose the binding data: Direct secret reference Declaring binding data through custom resource definition (CRD) or CR annotations Detection of binding data through owned resources 6.6.1.1. 
Provisioned service Provisioned service represents a backing service CR with a reference to a Secret resource placed in the .status.binding.name field of the backing service CR. As an Operator provider or the user who creates backing services, you can use this method to be compliant with the Service Binding specification, by creating a Secret resource and referencing it in the .status.binding.name section of the backing service CR. This Secret resource must provide all the binding data values required for a workload to connect to the backing service. The following examples show an AccountService CR that represents a backing service and a Secret resource referenced from the CR. Example: AccountService CR apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service spec: # ... status: binding: name: hippo-pguser-hippo Example: Referenced Secret resource apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: "<password>" user: "<username>" # ... When creating a service binding resource, you can directly give the details of the AccountService resource in the ServiceBinding specification as follows: Example: ServiceBinding resource apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: # ... services: - group: "example.com" version: v1alpha1 kind: AccountService name: prod-account-service application: name: spring-petclinic group: apps version: v1 resource: deployments Example: ServiceBinding resource in Specification API apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: # ... service: apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service workload: apiVersion: apps/v1 kind: Deployment name: spring-petclinic This method exposes all the keys in the hippo-pguser-hippo referenced Secret resource as binding data that is to be projected into the workload. 6.6.1.2. Direct secret reference You can use this method, if all the required binding data values are available in a Secret resource that you can reference in your Service Binding definition. In this method, a ServiceBinding resource directly references a Secret resource to connect to a service. All the keys in the Secret resource are exposed as binding data. Example: Specification with the binding.operators.coreos.com API apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: # ... services: - group: "" version: v1 kind: Secret name: hippo-pguser-hippo Example: Specification that is compliant with the servicebinding.io API apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: # ... service: apiVersion: v1 kind: Secret name: hippo-pguser-hippo 6.6.1.3. Declaring binding data through CRD or CR annotations You can use this method to annotate the resources of the backing service to expose the binding data with specific annotations. Adding annotations under the metadata section alters the CRs and CRDs of the backing services. Service Binding Operator detects the annotations added to the CRs and CRDs and then creates a Secret resource with the values extracted based on the annotations. 
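For example, assuming a hypothetical Database CR in the apps.example.org group that exposes its port through an annotation, the Operator resolves the annotated path and copies the resolved value into the binding secret that it generates for the ServiceBinding request. The generated secret shown below is only a sketch; the Operator derives the actual secret name:

apiVersion: apps.example.org/v1beta1
kind: Database
metadata:
  name: my-db
  namespace: my-petclinic
  annotations:
    service.binding/port: path={.spec.port}   # expose .spec.port as the "port" binding item
spec:
  port: 5432
---
# Binding secret generated by the Service Binding Operator after a ServiceBinding
# resource references the Database CR above (the name shown here is illustrative)
apiVersion: v1
kind: Secret
metadata:
  name: binding-request-<hash>
  namespace: my-petclinic
stringData:
  port: "5432"

The workload named in the ServiceBinding resource then consumes this secret as files or environment variables, as described in "Projecting binding data".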
The following examples show the annotations that are added under the metadata section and a referenced ConfigMap object from a resource: Example: Exposing binding data from a Secret object defined in the CR annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret' # ... The example places the name of the secret name in the {.metadata.name}-pguser-{.metadata.name} template that resolves to hippo-pguser-hippo . The template can contain multiple JSONPath expressions. Example: Referenced Secret object from a resource apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: "<password>" user: "<username>" Example: Exposing binding data from a ConfigMap object defined in the CR annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap' # ... The example places the name of the config map in the {.metadata.name}-config template that resolves to hippo-config . The template can contain multiple JSONPath expressions. Example: Referenced ConfigMap object from a resource apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: "10s" user: "hippo" 6.6.1.4. Detection of binding data through owned resources You can use this method if your backing service owns one or more Kubernetes resources such as route, service, config map, or secret that you can use to detect the binding data. In this method, the Service Binding Operator detects the binding data from resources owned by the backing service CR. The following examples show the detectBindingResources API option set to true in the ServiceBinding CR: Example apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-detect-all namespace: my-petclinic spec: detectBindingResources: true services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments In the example, PostgresCluster custom service resource owns one or more Kubernetes resources such as route, service, config map, or secret. The Service Binding Operator automatically detects the binding data exposed on each of the owned resources. 6.6.2. Data model The data model used in the annotations follows specific conventions. Service binding annotations must use the following convention: service.binding(/<NAME>)?: "<VALUE>|(path=<JSONPATH_TEMPLATE>(,objectType=<OBJECT_TYPE>)?(,elementType=<ELEMENT_TYPE>)?(,sourceKey=<SOURCE_KEY>)?(,sourceValue=<SOURCE_VALUE>)?)" where: <NAME> Specifies the name under which the binding value is to be exposed. You can exclude it only when the objectType parameter is set to Secret or ConfigMap . <VALUE> Specifies the constant value exposed when no path is set. The data model provides the details on the allowed values and semantic for the path , elementType , objectType , sourceKey , and sourceValue parameters. Table 6.4. Parameters and their descriptions Parameter Description Default value path JSONPath template that consists JSONPath expressions enclosed by curly braces {}. 
N/A elementType Specifies whether the value of the element referenced in the path parameter complies with any one of the following types: string sliceOfStrings sliceOfMaps string objectType Specifies whether the value of the element indicated in the path parameter refers to a ConfigMap , Secret , or plain string in the current namespace. Secret , if elementType is non-string. sourceKey Specifies the key in the ConfigMap or Secret resource to be added to the binding secret when collecting the binding data. Note: When used in conjunction with elementType = sliceOfMaps , the sourceKey parameter specifies the key in the slice of maps whose value is used as a key in the binding secret. Use this optional parameter to expose a specific entry in the referenced Secret or ConfigMap resource as binding data. When not specified, all keys and values from the Secret or ConfigMap resource are exposed and are added to the binding secret. N/A sourceValue Specifies the key in the slice of maps. Note: The value of this key is used as the base to generate the value of the entry for the key-value pair to be added to the binding secret. In addition, the value of the sourceKey is used as the key of the entry for the key-value pair to be added to the binding secret. It is mandatory only if elementType = sliceOfMaps . N/A Note The sourceKey and sourceValue parameters are applicable only if the element indicated in the path parameter refers to a ConfigMap or Secret resource. 6.6.3. Setting annotations mapping to be optional You can have optional fields in the annotations. For example, a path to the credentials might not be present if the service endpoint does not require authentication. In such cases, a field might not exist in the target path of the annotations. As a result, Service Binding Operator generates an error by default. As a service provider, to indicate whether you require annotations mapping, you can set a value for the optional flag in your annotations when enabling services. Service Binding Operator provides annotations mapping only if the target path is available. When the target path is not available, the Service Binding Operator skips the optional mapping and continues with the projection of the existing mappings without throwing any errors. Procedure To make a field in the annotations optional, set the optional flag value to true : Example apiVersion: apps.example.org/v1beta1 kind: Database metadata: name: my-db namespace: my-petclinic annotations: service.binding/username: path={.spec.name},optional=true # ... Note If you set the optional flag value to false and the Service Binding Operator is unable to find the target path, the Operator fails the annotations mapping. If the optional flag has no value set, the Service Binding Operator considers the value as false by default and fails the annotations mapping. 6.6.4. RBAC requirements To expose the backing service binding data using the Service Binding Operator, you require certain Role-based access control (RBAC) permissions. Specify certain verbs under the rules field of the ClusterRole resource to grant the RBAC permissions for the backing service resources. When you define these rules , you allow the Service Binding Operator to read the binding data of the backing service resources throughout the cluster. If the users do not have permissions to read binding data or modify the application resource, the Service Binding Operator prevents such users from binding services to applications.
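As a quick check before you create a ServiceBinding resource, you can verify whether a user or service account is allowed to read the backing service resources and modify the workload. The following commands are a sketch; the postgresclusters resource, the my-petclinic namespace, and the developer user are example values that you substitute with your own:

# Can the user read the backing service CRs that expose the binding data?
oc auth can-i get postgresclusters.postgres-operator.crunchydata.com -n my-petclinic --as=developer

# Can the same user modify the workload that receives the binding data?
oc auth can-i update deployments -n my-petclinic --as=developer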
Adhering to the RBAC requirements avoids unnecessary permission elevation for the user and prevents access to unauthorized services or applications. The Service Binding Operator performs requests against the Kubernetes API using a dedicated service account. By default, this account has permissions to bind services to workloads, both represented by the following standard Kubernetes or OpenShift objects: Deployments DaemonSets ReplicaSets StatefulSets DeploymentConfigs The Operator service account is bound to an aggregated cluster role, allowing Operator providers or cluster administrators to enable binding custom service resources to workloads. To grant the required permissions within a ClusterRole , label it with the servicebinding.io/controller flag and set the flag value to true . The following example shows how to allow the Service Binding Operator to get , watch , and list the custom resources (CRs) of Crunchy PostgreSQL Operator: Example: Enable binding to PostgreSQL database instances provisioned by Crunchy PostgreSQL Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: postgrescluster-reader labels: servicebinding.io/controller: "true" rules: - apiGroups: - postgres-operator.crunchydata.com resources: - postgresclusters verbs: - get - watch - list ... This cluster role can be deployed during the installation of the backing service Operator. 6.6.5. Categories of exposable binding data The Service Binding Operator enables you to expose the binding data values from the backing service resources and custom resource definitions (CRDs). This section provides examples to show how you can use the various categories of exposable binding data. You must modify these examples to suit your work environment and requirements. 6.6.5.1. Exposing a string from a resource The following example shows how to expose the string from the metadata.name field of the PostgresCluster custom resource (CR) as a username: Example apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name} # ... 6.6.5.2. Exposing a constant value as the binding item The following examples show how to expose a constant value from the PostgresCluster custom resource (CR): Example: Exposing a constant value apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: "service.binding/type": "postgresql" 1 1 Binding type to be exposed with the postgresql value. 6.6.5.3. Exposing an entire config map or secret that is referenced from a resource The following examples show how to expose an entire secret through annotations: Example: Exposing an entire secret through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret' Example: The referenced secret from the backing service resource apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: "<password>" user: "<username>" 6.6.5.4. 
Exposing a specific entry from a config map or secret that is referenced from a resource The following examples show how to expose a specific entry from a config map through annotations: Example: Exposing an entry from a config map through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap,sourceKey=user' Example: The referenced config map from the backing service resource The binding data should have a key with name as db_timeout and value as 10s : apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: "10s" user: "hippo" 6.6.5.5. Exposing a resource definition value The following example shows how to expose a resource definition value through annotations: Example: Exposing a resource definition value through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name} ... 6.6.5.6. Exposing entries of a collection with the key and value from each entry The following example shows how to expose the entries of a collection with the key and value from each entry through annotations: Example: Exposing the entries of a collection through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: "service.binding/uri": "path={.status.connections},elementType=sliceOfMaps,sourceKey=type,sourceValue=url" spec: # ... status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com The following example shows how the entries of a collection in annotations are projected into the bound application. Example: Binding data files /bindings/<binding-name>/uri_primary => primary.example.com /bindings/<binding-name>/uri_secondary => secondary.example.com /bindings/<binding-name>/uri_404 => black-hole.example.com Example: Configuration from a backing service resource status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com The example helps you to project all those values with keys such as primary , secondary , and so on. 6.6.5.7. Exposing items of a collection with one key per item The following example shows how to expose the items of a collection with one key per item through annotations: Example: Exposing the items of a collection through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: "service.binding/tags": "path={.spec.tags},elementType=sliceOfStrings" spec: tags: - knowledge - is - power The following example shows how the items of a collection in annotations are projected into the bound application. Example: Binding data files /bindings/<binding-name>/tags_0 => knowledge /bindings/<binding-name>/tags_1 => is /bindings/<binding-name>/tags_2 => power Example: Configuration from a backing service resource spec: tags: - knowledge - is - power 6.6.5.8. 
Exposing values of collection entries with one key per entry value The following example shows how to expose the values of collection entries with one key per entry value through annotations: Example: Exposing the values of collection entries through annotations apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: "service.binding/url": "path={.spec.connections},elementType=sliceOfStrings,sourceValue=url" spec: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com The following example shows how the values of a collection in annotations are projected into the bound application. Example: Binding data files /bindings/<binding-name>/url_0 => primary.example.com /bindings/<binding-name>/url_1 => secondary.example.com /bindings/<binding-name>/url_2 => black-hole.example.com 6.6.6. Additional resources Defining cluster service versions (CSVs) . Projecting binding data . 6.7. Projecting binding data This section provides information on how you can consume the binding data. 6.7.1. Consumption of binding data After the backing service exposes the binding data, for a workload to access and consume this data, you must project it into the workload from a backing service. Service Binding Operator automatically projects this set of data into the workload in the following methods: By default, as files. As environment variables, after you configure the .spec.bindAsFiles parameter from the ServiceBinding resource. 6.7.2. Configuration of the directory path to project the binding data inside workload container By default, Service Binding Operator mounts the binding data as files at a specific directory in your workload resource. You can configure the directory path using the SERVICE_BINDING_ROOT environment variable setup in the container where your workload runs. Example: Binding data mounted as files 1 Root directory. 2 5 Directory that stores the binding data. 3 Mandatory identifier that identifies the type of the binding data projected into the corresponding directory. 4 Optional: Identifier to identify the provider so that the application can identify the type of backing service it can connect to. To consume the binding data as environment variables, use the built-in language feature of your programming language of choice that can read environment variables. Example: Python client usage Warning For using the binding data directory name to look up the binding data Service Binding Operator uses the ServiceBinding resource name ( .metadata.name ) as the binding data directory name. The spec also provides a way to override that name through the .spec.name field. As a result, there is a chance for binding data name collision if there are multiple ServiceBinding resources in the namespace. However, due to the nature of the volume mount in Kubernetes, the binding data directory will contain values from only one of the Secret resources. 6.7.2.1. Computation of the final path for projecting the binding data as files The following table summarizes the configuration of how the final path for the binding data projection is computed when files are mounted at a specific directory: Table 6.5. 
Summary of the final path computation SERVICE_BINDING_ROOT Final path Not available /bindings/<ServiceBinding_ResourceName> dir/path/root dir/path/root/<ServiceBinding_ResourceName> In the table, the <ServiceBinding_ResourceName> entry specifies the name of the ServiceBinding resource that you configure in the .metadata.name section of the custom resource (CR). Note By default, the projected files get their permissions set to 0644. Service Binding Operator cannot set specific permissions due to a bug in Kubernetes that causes issues if the service expects specific permissions such as 0600 . As a workaround, you can modify the code of the program or the application that is running inside a workload resource to copy the file to the /tmp directory and set the appropriate permissions. To access and consume the binding data within the existing SERVICE_BINDING_ROOT environment variable, use the built-in language feature of your programming language of choice that can read environment variables. Example: Python client usage In the example, the bindings_list variable contains the binding data for the postgresql database service type. 6.7.3. Projecting the binding data Depending on your workload requirements and environment, you can choose to project the binding data either as files or environment variables. Prerequisites You understand the following concepts: Environment and requirements of your workload, and how it works with the provided services. Consumption of the binding data in your workload resource. Configuration of how the final path for data projection is computed for the default method. The binding data is exposed from the backing service. Procedure To project the binding data as files, determine the destination folder by ensuring that the existing SERVICE_BINDING_ROOT environment variable is present in the container where your workload runs. To project the binding data as environment variables, set the value for the .spec.bindAsFiles parameter to false from the ServiceBinding resource in the custom resource (CR). 6.7.4. Additional resources Exposing binding data from a service . Using the projected binding data in the source code of the application . 6.8. Binding workloads using Service Binding Operator Application developers must bind a workload to one or more backing services by using a binding secret. This secret is generated for the purpose of storing information to be consumed by the workload. As an example, consider that the service you want to connect to is already exposing the binding data. In this case, you would also need a workload to be used along with the ServiceBinding custom resource (CR). By using this ServiceBinding CR, the workload sends a binding request with the details of the services to bind with. Example of ServiceBinding CR apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: 2 name: spring-petclinic group: apps version: v1 resource: deployments 1 Specifies a list of service resources. 2 The sample application that points to a Deployment or any other similar resource with an embedded PodSpec. As shown in the example, you can also directly use a ConfigMap or a Secret itself as a service resource to be used as a source of binding data. 6.8.1. Naming strategies Naming strategies are available only for the binding.operators.coreos.com API group. 
Naming strategies use Go templates to help you define custom binding names through the service binding request. Naming strategies apply for all attributes including the mappings in the ServiceBinding custom resource (CR). A backing service projects the binding names as files or environment variables into the workload. If a workload expects the projected binding names in a particular format, but the binding names to be projected from the backing service are not available in that format, then you can change the binding names using naming strategies. Predefined post-processing functions While using naming strategies, depending on the expectations or requirements of your workload, you can use the following predefined post-processing functions in any combination to convert the character strings: upper : Converts the character strings into capital or uppercase letters. lower : Converts the character strings into lowercase letters. title : Converts the character strings where the first letter of each word is capitalized except for certain minor words. Predefined naming strategies Binding names declared through annotations are processed for their name change before their projection into the workload according to the following predefined naming strategies: none : When applied, there are no changes in the binding names. Example After the template compilation, the binding names take the {{ .name }} form. host: hippo-pgbouncer port: 5432 upper : Applied when no namingStrategy is defined. When applied, converts all the character strings of the binding name key into capital or uppercase letters. Example After the template compilation, the binding names take the {{ .service.kind | upper}}_{{ .name | upper }} form. DATABASE_HOST: hippo-pgbouncer DATABASE_PORT: 5432 If your workload requires a different format, you can define a custom naming strategy and change the binding name using a prefix and a separator, for example, PORT_DATABASE . Note When the binding names are projected as files, by default the predefined none naming strategy is applied, and the binding names do not change. When the binding names are projected as environment variables and no namingStrategy is defined, by default the predefined uppercase naming strategy is applied. You can override the predefined naming strategies by defining custom naming strategies using different combinations of custom binding names and predefined post-processing functions. 6.8.2. Advanced binding options You can define the ServiceBinding custom resource (CR) to use the following advanced binding options: Changing binding names: This option is available only for the binding.operators.coreos.com API group. Composing custom binding data: This option is available only for the binding.operators.coreos.com API group. Binding workloads using label selectors: This option is available for both the binding.operators.coreos.com and servicebinding.io API groups. 6.8.2.1. Changing the binding names before projecting them into the workload You can specify the rules to change the binding names in the .spec.namingStrategy attribute of the ServiceBinding CR. For example, consider a Spring PetClinic sample application that connects to the PostgreSQL database. In this case, the PostgreSQL database service exposes the host and port fields of the database to use for binding. The Spring PetClinic sample application can access this exposed binding data through the binding names. Example: Spring PetClinic sample application in the ServiceBinding CR # ... 
application: name: spring-petclinic group: apps version: v1 resource: deployments # ... Example: PostgreSQL database service in the ServiceBinding CR # ... services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo # ... If namingStrategy is not defined and the binding names are projected as environment variables, then the host: hippo-pgbouncer value in the backing service and the projected environment variable would appear as shown in the following example: Example DATABASE_HOST: hippo-pgbouncer where: DATABASE Specifies the kind backend service. HOST Specifies the binding name. After applying the POSTGRESQL_{{ .service.kind | upper }}_{{ .name | upper }}_ENV naming strategy, the list of custom binding names prepared by the service binding request appears as shown in the following example: Example POSTGRESQL_DATABASE_HOST_ENV: hippo-pgbouncer POSTGRESQL_DATABASE_PORT_ENV: 5432 The following items describe the expressions defined in the POSTGRESQL_{{ .service.kind | upper }}_{{ .name | upper }}_ENV naming strategy: .name : Refers to the binding name exposed by the backing service. In the example, the binding names are HOST and PORT . .service.kind : Refers to the kind of service resource whose binding names are changed with the naming strategy. upper : String function used to post-process the character string while compiling the Go template string. POSTGRESQL : Prefix of the custom binding name. ENV : Suffix of the custom binding name. Similar to the example, you can define the string templates in namingStrategy to define how each key of the binding names should be prepared by the service binding request. 6.8.2.2. Composing custom binding data As an application developer, you can compose custom binding data under the following circumstances: The backing service does not expose binding data. The values exposed are not available in the required format as expected by the workload. For example, consider a case where the backing service CR exposes the host, port, and database user as binding data, but the workload requires that the binding data be consumed as a connection string. You can compose custom binding data using attributes in the Kubernetes resource representing the backing service. Example apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo 1 id: postgresDB 2 - group: "" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: name: spring-petclinic group: apps version: v1 resource: deployments mappings: ## From the database service - name: JDBC_URL value: 'jdbc:postgresql://{{ .postgresDB.metadata.annotations.proxy }}:{{ .postgresDB.spec.port }}/{{ .postgresDB.metadata.name }}' ## From both the services! - name: CREDENTIALS value: '{{ .postgresDB.metadata.name }}{{ translationService.postgresSecret.data.password }}' ## Generate JSON - name: DB_JSON 3 value: {{ json .postgresDB.status }} 4 1 Name of the backing service resource. 2 Optional identifier. 3 The JSON name that the Service Binding Operator generates. The Service Binding Operator projects this JSON name as the name of a file or environment variable. 4 The JSON value that the Service Binding Operator generates. The Service Binding Operator projects this JSON value as a file or environment variable. 
The JSON value contains the attributes from your specified field of the backing service custom resource. 6.8.2.3. Binding workloads using a label selector You can use a label selector to specify the workload to bind. If you declare a service binding using the label selectors to pick up workloads, the Service Binding Operator periodically attempts to find and bind new workloads that match the given label selector. For example, as a cluster administrator, you can bind a service to every Deployment in a namespace with the environment: production label by setting an appropriate labelSelector field in the ServiceBinding CR. This enables the Service Binding Operator to bind each of these workloads with one ServiceBinding CR. Example ServiceBinding CR in the binding.operators.coreos.com/v1alpha1 API apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: application: labelSelector: 1 matchLabels: environment: production group: apps version: v1 resource: deployments services: - group: "" version: v1 kind: Secret name: super-secret-data 1 Specifies the workload that is being bound. Example ServiceBinding CR in the servicebinding.io API apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: workload: selector: 1 matchLabels: environment: production apiVersion: apps/v1 kind: Deployment service: apiVersion: v1 kind: Secret name: super-secret-data 1 Specifies the workload that is being bound. Important If you define the following pairs of fields, Service Binding Operator refuses the binding operation and generates an error: The name and labelSelector fields in the binding.operators.coreos.com/v1alpha1 API. The name and selector fields in the servicebinding.io API (Spec API). Understanding the rebinding behavior Consider a case where, after a successful binding, you use the name field to identify a workload. If you delete and recreate that workload, the ServiceBinding reconciler does not rebind the workload, and the Operator cannot project the binding data to the workload. However, if you use the labelSelector field to identify a workload, the ServiceBinding reconciler rebinds the workload, and the Operator projects the binding data. 6.8.3. Binding secondary workloads that are not compliant with PodSpec A typical scenario in service binding involves configuring the backing service, the workload (Deployment), and Service Binding Operator. Consider a scenario that involves a secondary workload (which can also be an application Operator) that is not compliant with PodSpec and sits between the primary workload (Deployment) and Service Binding Operator. For such secondary workload resources, the location of the container path is arbitrary. For service binding, if the secondary workload in a CR is not compliant with the PodSpec, you must specify the location of the container path. Doing so projects the binding data into the container path specified in the secondary workload of the ServiceBinding custom resource (CR), for example, when you do not want the binding data inside a pod. In Service Binding Operator, you can configure the path of where containers or secrets reside within a workload and bind these paths at a custom location. 6.8.3.1.
Configuring the custom location of the container path This custom location is available for the binding.operators.coreos.com API group when Service Binding Operator projects the binding data as environment variables. Consider a secondary workload CR, which is not compliant with the PodSpec and has containers located at the spec.containers path: Example: Secondary workload CR apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - name: hello-world image: quay.io/baijum/secondary-workload:latest ports: - containerPort: 8080 Procedure Configure the spec.containers path by specifying a value in the ServiceBinding CR and bind this path to a spec.application.bindingPath.containersPath custom location: Example: ServiceBinding CR with the spec.containers path in a custom location apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo id: postgresDB - group: "" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: 1 name: spring-petclinic group: apps version: v1 resource: deployments application: 2 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: containersPath: spec.containers 3 1 The sample application that points to a Deployment or any other similar resource with an embedded PodSpec. 2 The secondary workload, which is not compliant with the PodSpec. 3 The custom location of the container path. After you specify the location of the container path, Service Binding Operator generates the binding data, which becomes available in the container path specified in the secondary workload of the ServiceBinding CR. The following example shows the spec.containers path with the envFrom and secretRef fields: Example: Secondary workload CR with the envFrom and secretRef fields apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - env: 1 - name: ServiceBindingOperatorChangeTriggerEnvVar value: "31793" envFrom: - secretRef: name: secret-resource-name 2 image: quay.io/baijum/secondary-workload:latest name: hello-world ports: - containerPort: 8080 resources: {} 1 Unique array of containers with values generated by the Service Binding Operator. These values are based on the backing service CR. 2 Name of the Secret resource generated by the Service Binding Operator. 6.8.3.2. Configuring the custom location of the secret path This custom location is available for the binding.operators.coreos.com API group when Service Binding Operator projects the binding data as environment variables. Consider a secondary workload CR, which is not compliant with the PodSpec, with only the secret at the spec.secret path: Example: Secondary workload CR apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: "" Procedure Configure the spec.secret path by specifying a value in the ServiceBinding CR and bind this path at a spec.application.bindingPath.secretPath custom location: Example: ServiceBinding CR with the spec.secret path in a custom location apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: ... application: 1 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: secretPath: spec.secret 2 ... 
1 The secondary workload, which is not compliant with the PodSpec. 2 The custom location of the secret path that contains the name of the Secret resource. After you specify the location of the secret path, Service Binding Operator generates the binding data, which becomes available in the secret path specified in the secondary workload of the ServiceBinding CR. The following example shows the spec.secret path with the binding-request value: Example: Secondary workload CR with the binding-request value ... apiVersion: "operator.sbo.com/v1" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: binding-request-72ddc0c540ab3a290e138726940591debf14c581 1 ... 1 The unique name of the Secret resource that Service Binding Operator generates. 6.8.3.3. Workload resource mapping Note Workload resource mapping is available for the secondary workloads of the ServiceBinding custom resource (CR) for both the API groups: binding.operators.coreos.com and servicebinding.io . You must define ClusterWorkloadResourceMapping resources only under the servicebinding.io API group. However, the ClusterWorkloadResourceMapping resources interact with ServiceBinding resources under both the binding.operators.coreos.com and servicebinding.io API groups. If you cannot configure custom path locations by using the configuration method for container path, you can define exactly where binding data needs to be projected. Specify where to project the binding data for a given workload kind by defining the ClusterWorkloadResourceMapping resources in the servicebinding.io API group. The following example shows how to define a mapping for the CronJob.batch/v1 resources. Example: Mapping for CronJob.batch/v1 resources apiVersion: servicebinding.io/v1beta1 kind: ClusterWorkloadResourceMapping metadata: name: cronjobs.batch 1 spec: versions: - version: "v1" 2 annotations: .spec.jobTemplate.spec.template.metadata.annotations 3 containers: - path: .spec.jobTemplate.spec.template.spec.containers[*] 4 - path: .spec.jobTemplate.spec.template.spec.initContainers[*] name: .name 5 env: .env 6 volumeMounts: .volumeMounts 7 volumes: .spec.jobTemplate.spec.template.spec.volumes 8 1 Name of the ClusterWorkloadResourceMapping resource, which must be qualified as the plural.group of the mapped workload resource. 2 Version of the resource that is being mapped. Any version that is not specified can be matched with the "*" wildcard. 3 Optional: Identifier of the .annotations field in a pod, specified with a fixed JSONPath. The default value is .spec.template.spec.annotations . 4 Identifier of the .containers and .initContainers fields in a pod, specified with a JSONPath. If no entries under the containers field are defined, the Service Binding Operator defaults to two paths: .spec.template.spec.containers[*] and .spec.template.spec.initContainers[\*] , with all other fields set as their default. However, if you specify an entry, then you must define the .path field. 5 Optional: Identifier of the .name field in a container, specified with a fixed JSONPath. The default value is .name . 6 Optional: Identifier of the .env field in a container, specified with a fixed JSONPath. The default value is .env . 7 Optional: Identifier of the .volumeMounts field in a container, specified with a fixed JSONPath. The default value is .volumeMounts . 8 Optional: Identifier of the .volumes field in a pod, specified with a fixed JSONPath. The default value is .spec.template.spec.volumes . 
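With a mapping such as the one above applied to the cluster, a ServiceBinding resource can target a CronJob workload directly. The following sketch assumes the hippo PostgresCluster backing service from the earlier examples and a CronJob named db-report; both names are illustrative:

apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: db-report-binding
  namespace: my-petclinic
spec:
  service:
    apiVersion: postgres-operator.crunchydata.com/v1beta1
    kind: PostgresCluster
    name: hippo
  workload:
    apiVersion: batch/v1
    kind: CronJob   # resolved through the cronjobs.batch ClusterWorkloadResourceMapping
    name: db-report

The Operator then uses the JSONPath entries from the mapping to determine where in the CronJob pod template to project the binding data.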
Important In this context, a fixed JSONPath is a subset of the JSONPath grammar that accepts only the following operations: Field lookup: .spec.template Array indexing: .spec['template'] All other operations are not accepted. Most of these fields are optional. When they are not specified, the Service Binding Operator assumes defaults compatible with PodSpec resources. The Service Binding Operator requires that each of these fields is structurally equivalent to the corresponding field in a pod deployment. For example, the contents of the .env field in a workload resource must be able to accept the same structure of data that the .env field in a Pod resource would. Otherwise, projecting binding data into such a workload might result in unexpected behavior from the Service Binding Operator. Behavior specific to the binding.operators.coreos.com API group You can expect the following behaviors when ClusterWorkloadResourceMapping resources interact with ServiceBinding resources under the binding.operators.coreos.com API group: If a ServiceBinding resource with the bindAsFiles: false flag value is created together with one of these mappings, then environment variables are projected into the .envFrom field underneath each path field specified in the corresponding ClusterWorkloadResourceMapping resource. As a cluster administrator, you can specify both a ClusterWorkloadResourceMapping resource and the .spec.application.bindingPath.containersPath field in a ServiceBinding.bindings.coreos.com resource for binding purposes. The Service Binding Operator attempts to project binding data into the locations specified in both a ClusterWorkloadResourceMapping resource and the .spec.application.bindingPath.containersPath field. This behavior is equivalent to adding a container entry to the corresponding ClusterWorkloadResourceMapping resource with the path: USDcontainersPath attribute, with all other values taking their default value. 6.8.4. Unbinding workloads from a backing service You can unbind a workload from a backing service by using the oc tool. To unbind a workload from a backing service, delete the ServiceBinding custom resource (CR) linked to it: USD oc delete ServiceBinding <.metadata.name> Example USD oc delete ServiceBinding spring-petclinic-pgcluster where: spring-petclinic-pgcluster Specifies the name of the ServiceBinding CR. 6.8.5. Additional resources Binding a workload together with a backing service . Connecting the Spring PetClinic sample application to the PostgreSQL database service . Creating custom resources from a file Example schema of the ClusterWorkloadResourceMapping resource . 6.9. Connecting an application to a service using the Developer perspective Use the Topology view for the following purposes: Grouping multiple components within an application. Connecting components with each other. Connecting multiple resources to services with labels. You can either use a binding or a visual connector to connect components. A binding connection between the components can be established only if the target node is an Operator-backed service. This is indicated by the Create a binding connector tool-tip which appears when you drag an arrow to such a target node. When an application is connected to a service by using a binding connector a ServiceBinding resource is created. Then, the Service Binding Operator controller projects the necessary binding data into the application deployment. 
After the request is successful, the application is redeployed establishing an interaction between the connected components. A visual connector establishes only a visual connection between the components, depicting an intent to connect. No interaction between the components is established. If the target node is not an Operator-backed service the Create a visual connector tool-tip is displayed when you drag an arrow to a target node. 6.9.1. Discovering and identifying Operator-backed bindable services As a user, if you want to create a bindable service, you must know which services are bindable. Bindable services are services that the applications can consume easily because they expose their binding data such as credentials, connection details, volume mounts, secrets, and other binding data in a standard way. The Developer perspective helps you discover and identify such bindable services. Procedure To discover and identify Operator-backed bindable services, consider the following alternative approaches: Click +Add Developer Catalog Operator Backed to see the Operator-backed tiles. Operator-backed services that support service binding features have a Bindable badge on the tiles. On the left pane of the Operator Backed page, select Bindable . Tip Click the help icon to Service binding to see more information about bindable services. Click +Add Add and search for Operator-backed services. When you click the bindable service, you can view the Bindable badge in the side panel. 6.9.2. Creating a visual connection between components You can depict an intent to connect application components by using the visual connector. This procedure walks you through an example of creating a visual connection between a PostgreSQL Database service and a Spring PetClinic sample application. Prerequisites You have created and deployed a Spring PetClinic sample application by using the Developer perspective. You have created and deployed a Crunchy PostgreSQL database instance by using the Developer perspective. This instance has the following components: hippo-backup , hippo-instance , hippo-repo-host , and hippo-pgbouncer . Procedure In the Developer perspective, switch to the relevant project, for example, my-petclinic . Hover over the Spring PetClinic sample application to see a dangling arrow on the node. Figure 6.2. Visual connector Click and drag the arrow towards the hippo-pgbouncer deployment to connect the Spring PetClinic sample application with it. Click the spring-petclinic deployment to see the Overview panel. Under the Details tab, click the edit icon in the Annotations section to see the Key = app.openshift.io/connects-to and Value = [{"apiVersion":"apps/v1","kind":"Deployment","name":"hippo-pgbouncer"}] annotation added to the deployment. Optional: You can repeat these steps to establish visual connections between other applications and components you create. Figure 6.3. Connecting multiple applications 6.9.3. Creating a binding connection between components You can create a binding connection with Operator-backed components, as demonstrated in the following example, which uses a PostgreSQL Database service and a Spring PetClinic sample application. To create a binding connection with a service that the PostgreSQL Database Operator backs, you must first add the Red Hat-provided PostgreSQL Database Operator to the OperatorHub , and then install the Operator. 
The PostgreSQL Database Operator then creates and manages the Database resource, which exposes the binding data in secrets, config maps, status, and spec attributes. Prerequisites You created and deployed a Spring PetClinic sample application in the Developer perspective. You installed Service Binding Operator from the OperatorHub . You installed the Crunchy Postgres for Kubernetes Operator from the OperatorHub in the v5 Update channel. You created a PostgresCluster resource in the Developer perspective, which resulted in a Crunchy PostgreSQL database instance with the following components: hippo-backup , hippo-instance , hippo-repo-host , and hippo-pgbouncer . Procedure In the Developer perspective, switch to the relevant project, for example, my-petclinic . In the Topology view, hover over the Spring PetClinic sample application to see a dangling arrow on the node. Drag and drop the arrow onto the hippo database icon in the Postgres Cluster to make a binding connection with the Spring PetClinic sample application. In the Create Service Binding dialog, keep the default name or add a different name for the service binding, and then click Create . Figure 6.4. Service Binding dialog Optional: If you have difficulty making a binding connection by using the Topology view, go to +Add YAML Import YAML . Optional: In the YAML editor, add the ServiceBinding resource: apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments A service binding request is created, and a binding connection is made through a ServiceBinding resource. When the database service connection request succeeds, the application is redeployed and the connection is established. Figure 6.5. Binding connector Tip You can also use the context menu by dragging the dangling arrow to create a binding connection to an Operator-backed service. Figure 6.6. Context menu to create binding connection In the navigation menu, click Topology . The spring-petclinic deployment in the Topology view includes an Open URL link to view its web page. Click the Open URL link. You can now view the Spring PetClinic sample application remotely to confirm that the application is now connected to the database service and that the data has been successfully projected to the application from the Crunchy PostgreSQL database service. The Service Binding Operator has successfully created a working connection between the application and the database service. 6.9.4. Verifying the status of your service binding from the Topology view The Developer perspective helps you verify the status of your service binding through the Topology view. Procedure If a service binding was successful, click the binding connector. A side panel appears displaying the Connected status under the Details tab. Optionally, you can view the Connected status on the following pages from the Developer perspective: The ServiceBindings page. The ServiceBinding details page. In addition, the page title displays a Connected badge. If a service binding was unsuccessful, the binding connector shows a red arrowhead and a red cross in the middle of the connection. Click this connector to view the Error status in the side panel under the Details tab.
Optionally, click the Error status to view specific information about the underlying problem. You can also view the Error status and a tooltip on the following pages from the Developer perspective: The ServiceBindings page. The ServiceBinding details page. In addition, the page title displays an Error badge. Tip In the ServiceBindings page, use the Filter dropdown to list the service bindings based on their status. 6.9.5. Visualizing the binding connections to resources As a user, use Label Selector in the Topology view to visualize a service binding and simplify the process of binding applications to backing services. When creating ServiceBinding resources, specify labels by using Label Selector to find and connect applications instead of using the name of the application. The Service Binding Operator then consumes these ServiceBinding resources and specified labels to find the applications to create a service binding with. Tip To navigate to a list of all connected resources, click the label selector associated with the ServiceBinding resource. To view the Label Selector , consider the following approaches: After you import a ServiceBinding resource, view the Label Selector associated with the service binding on the ServiceBinding details page. Figure 6.7. ServiceBinding details page Note To use Label Selector and to create one or more connections at once, you must import the YAML file of the ServiceBinding resource. After the connection is established and when you click the binding connector, the service binding connector Details side panel appears. You can view the Label Selector associated with the service binding on this panel. Figure 6.8. Topology label selector side panel Note When you delete a binding connector (a single connection within Topology along with a service binding), the action removes all connections that are tied to the deleted service binding. While deleting a binding connector, a confirmation dialog appears, which informs that all connectors will be deleted. Figure 6.9. Delete ServiceBinding confirmation dialog 6.9.6. Additional resources Getting started with service binding Known bindable Operators
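For reference, the Label Selector workflow described above corresponds to a ServiceBinding that selects workloads by label rather than by name. The following is a minimal sketch; the environment: production label, the service-binding-demo namespace, and the super-secret-data Secret are placeholders carried over from the generic examples in this guide, so substitute your own label and backing service.

oc apply -n service-binding-demo -f - << EOD
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: multi-application-binding
spec:
  application:
    # Bind every Deployment that carries this label instead of naming a single application.
    labelSelector:
      matchLabels:
        environment: production
    group: apps
    version: v1
    resource: deployments
  services:
    - group: ""
      version: v1
      kind: Secret
      name: super-secret-data
EOD

After you import a resource like this, you can click the label selector associated with the ServiceBinding resource to list all connected resources.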
[ "`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role", "`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role", "`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role", "`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role", "oc apply -n my-petclinic -f - << EOD --- apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo spec: image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.4-0 postgresVersion: 14 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi EOD", "postgrescluster.postgres-operator.crunchydata.com/hippo created", "oc get pods -n my-petclinic", "NAME READY STATUS RESTARTS AGE hippo-backup-9rxm-88rzq 0/1 Completed 0 2m2s hippo-instance1-6psd-0 4/4 Running 0 3m28s hippo-repo-host-0 2/2 Running 0 3m28s", "oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres 
ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD", "deployment.apps/spring-petclinic created service/spring-petclinic created", "oc get pods -n my-petclinic", "NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s", "oc expose service spring-petclinic -n my-petclinic", "route.route.openshift.io/spring-petclinic exposed", "oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster 2 name: hippo application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD", "servicebinding.binding.operators.coreos.com/spring-petclinic created", "oc get servicebindings -n my-petclinic", "NAME READY REASON AGE spring-petclinic-pgcluster True ApplicationsBound 7s", "for i in username password host port type; do oc exec -it deploy/spring-petclinic -n my-petclinic -- /bin/bash -c 'cd /tmp; find /bindings/*/'USDi' -exec echo -n {}:\" \" \\; -exec cat {} \\;'; echo; done", "/bindings/spring-petclinic-pgcluster/username: <username> /bindings/spring-petclinic-pgcluster/password: <password> /bindings/spring-petclinic-pgcluster/host: hippo-primary.my-petclinic.svc /bindings/spring-petclinic-pgcluster/port: 5432 /bindings/spring-petclinic-pgcluster/type: postgresql", "oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic", "Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080", "oc apply -f - << EOD --- apiVersion: v1 kind: Namespace metadata: name: my-petclinic --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: postgres-operator-group namespace: my-petclinic --- apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: ibm-multiarch-catalog namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/ibm/operator-registry-<architecture> 1 imagePullPolicy: IfNotPresent displayName: ibm-multiarch-catalog updateStrategy: registryPoll: interval: 30m --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: postgresql-operator-dev4devs-com namespace: openshift-operators spec: channel: alpha installPlanApproval: Automatic name: postgresql-operator-dev4devs-com source: ibm-multiarch-catalog sourceNamespace: openshift-marketplace --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: database-view labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgresql.dev4devs.com resources: - databases verbs: - get - list EOD", "oc get subs -n openshift-operators", "NAME PACKAGE SOURCE CHANNEL postgresql-operator-dev4devs-com postgresql-operator-dev4devs-com ibm-multiarch-catalog alpha rh-service-binding-operator rh-service-binding-operator redhat-operators stable", "oc apply -f - << EOD apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: sampledatabase namespace: my-petclinic annotations: host: sampledatabase type: postgresql port: \"5432\" service.binding/database: 'path={.spec.databaseName}' service.binding/port: 'path={.metadata.annotations.port}' service.binding/password: 'path={.spec.databasePassword}' service.binding/username: 'path={.spec.databaseUser}' service.binding/type: 
'path={.metadata.annotations.type}' service.binding/host: 'path={.metadata.annotations.host}' spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: \"sampledb\" databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: \"samplepwd\" databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: \"sampleuser\" databaseUserKeyEnvVar: POSTGRESQL_USER image: registry.redhat.io/rhel8/postgresql-13:latest databaseStorageClassName: nfs-storage-provisioner size: 1 EOD", "database.postgresql.dev4devs.com/sampledatabase created", "oc get pods -n my-petclinic", "NAME READY STATUS RESTARTS AGE sampledatabase-cbc655488-74kss 0/1 Running 0 32s", "oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres - name: org.springframework.cloud.bindings.boot.enable value: \"true\" ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD", "deployment.apps/spring-petclinic created service/spring-petclinic created", "oc get pods -n my-petclinic", "NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s", "oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgresql.dev4devs.com kind: Database 2 name: sampledatabase version: v1alpha1 application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD", "servicebinding.binding.operators.coreos.com/spring-petclinic created", "oc get servicebindings -n my-petclinic", "NAME READY REASON AGE spring-petclinic-postgresql True ApplicationsBound 47m", "oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic", "Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080", "apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service spec: status: binding: name: hippo-pguser-hippo", "apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"", "apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"example.com\" version: v1alpha1 kind: AccountService name: prod-account-service application: name: spring-petclinic group: apps version: v1 resource: deployments", "apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service workload: apiVersion: apps/v1 kind: Deployment name: spring-petclinic", "apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo", "apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: v1 kind: Secret name: hippo-pguser-hippo", "apiVersion: 
postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'", "apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"", "apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap'", "apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"", "apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-detect-all namespace: my-petclinic spec: detectBindingResources: true services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments", "service.binding(/<NAME>)?: \"<VALUE>|(path=<JSONPATH_TEMPLATE>(,objectType=<OBJECT_TYPE>)?(,elementType=<ELEMENT_TYPE>)?(,sourceKey=<SOURCE_KEY>)?(,sourceValue=<SOURCE_VALUE>)?)\"", "apiVersion: apps.example.org/v1beta1 kind: Database metadata: name: my-db namespace: my-petclinic annotations: service.binding/username: path={.spec.name},optional=true", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: postgrescluster-reader labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgres-operator.crunchydata.com resources: - postgresclusters verbs: - get - watch - list", "apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}", "apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/type\": \"postgresql\" 1", "apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'", "apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"", "apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap,sourceKey=user'", "apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"", "apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}", "apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/uri\": \"path={.status.connections},elementType=sliceOfMaps,sourceKey=type,sourceValue=url\" spec: status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com", "/bindings/<binding-name>/uri_primary => primary.example.com /bindings/<binding-name>/uri_secondary => secondary.example.com /bindings/<binding-name>/uri_404 => black-hole.example.com", "status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: 
black-hole.example.com", "apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/tags\": \"path={.spec.tags},elementType=sliceOfStrings\" spec: tags: - knowledge - is - power", "/bindings/<binding-name>/tags_0 => knowledge /bindings/<binding-name>/tags_1 => is /bindings/<binding-name>/tags_2 => power", "spec: tags: - knowledge - is - power", "apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/url\": \"path={.spec.connections},elementType=sliceOfStrings,sourceValue=url\" spec: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com", "/bindings/<binding-name>/url_0 => primary.example.com /bindings/<binding-name>/url_1 => secondary.example.com /bindings/<binding-name>/url_2 => black-hole.example.com", "USDSERVICE_BINDING_ROOT 1 ├── account-database 2 │ ├── type 3 │ ├── provider 4 │ ├── uri │ ├── username │ └── password └── transaction-event-stream 5 ├── type ├── connection-count ├── uri ├── certificates └── private-key", "import os username = os.getenv(\"USERNAME\") password = os.getenv(\"PASSWORD\")", "from pyservicebinding import binding try: sb = binding.ServiceBinding() except binding.ServiceBindingRootMissingError as msg: # log the error message and retry/exit print(\"SERVICE_BINDING_ROOT env var not set\") sb = binding.ServiceBinding() bindings_list = sb.bindings(\"postgresql\")", "apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: 2 name: spring-petclinic group: apps version: v1 resource: deployments", "host: hippo-pgbouncer port: 5432", "DATABASE_HOST: hippo-pgbouncer DATABASE_PORT: 5432", "application: name: spring-petclinic group: apps version: v1 resource: deployments", "services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo", "DATABASE_HOST: hippo-pgbouncer", "POSTGRESQL_DATABASE_HOST_ENV: hippo-pgbouncer POSTGRESQL_DATABASE_PORT_ENV: 5432", "apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo 1 id: postgresDB 2 - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: name: spring-petclinic group: apps version: v1 resource: deployments mappings: ## From the database service - name: JDBC_URL value: 'jdbc:postgresql://{{ .postgresDB.metadata.annotations.proxy }}:{{ .postgresDB.spec.port }}/{{ .postgresDB.metadata.name }}' ## From both the services! 
- name: CREDENTIALS value: '{{ .postgresDB.metadata.name }}{{ translationService.postgresSecret.data.password }}' ## Generate JSON - name: DB_JSON 3 value: {{ json .postgresDB.status }} 4", "apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: application: labelSelector: 1 matchLabels: environment: production group: apps version: v1 resource: deployments services: group: \"\" version: v1 kind: Secret name: super-secret-data", "apiVersion: servicebindings.io/v1beta1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: workload: selector: 1 matchLabels: environment: production apiVersion: app/v1 kind: Deployment service: apiVersion: v1 kind: Secret name: super-secret-data", "apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - name: hello-world image: quay.io/baijum/secondary-workload:latest ports: - containerPort: 8080", "apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo id: postgresDB - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: 1 name: spring-petclinic group: apps version: v1 resource: deployments application: 2 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: containersPath: spec.containers 3", "apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - env: 1 - name: ServiceBindingOperatorChangeTriggerEnvVar value: \"31793\" envFrom: - secretRef: name: secret-resource-name 2 image: quay.io/baijum/secondary-workload:latest name: hello-world ports: - containerPort: 8080 resources: {}", "apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: \"\"", "apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: application: 1 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: secretPath: spec.secret 2", "apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: binding-request-72ddc0c540ab3a290e138726940591debf14c581 1", "apiVersion: servicebinding.io/v1beta1 kind: ClusterWorkloadResourceMapping metadata: name: cronjobs.batch 1 spec: versions: - version: \"v1\" 2 annotations: .spec.jobTemplate.spec.template.metadata.annotations 3 containers: - path: .spec.jobTemplate.spec.template.spec.containers[*] 4 - path: .spec.jobTemplate.spec.template.spec.initContainers[*] name: .name 5 env: .env 6 volumeMounts: .volumeMounts 7 volumes: .spec.jobTemplate.spec.template.spec.volumes 8", "oc delete ServiceBinding <.metadata.name>", "oc delete ServiceBinding spring-petclinic-pgcluster", "apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/building_applications/connecting-applications-to-services
Chapter 8. opm CLI
Chapter 8. opm CLI 8.1. Installing the opm CLI 8.1.1. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. Additional resources See Operator Framework packaging format for more information about the bundle format. To create a bundle image using the Operator SDK, see Working with bundle images . 8.1.2. Installing the opm CLI You can install the opm CLI tool on your Linux, macOS, or Windows workstation. Prerequisites For Linux, you must provide the following packages. RHEL 8 meets these requirements: podman version 1.9.3+ (version 2.0+ recommended) glibc version 2.28+ Procedure Navigate to the OpenShift mirror site and download the latest version of the tarball that matches your operating system. Unpack the archive. For Linux or macOS: USD tar xvf <file> For Windows, unzip the archive with a ZIP program. Place the file anywhere in your PATH . For Linux or macOS: Check your PATH : USD echo USDPATH Move the file. For example: USD sudo mv ./opm /usr/local/bin/ For Windows: Check your PATH : C:\> path Move the file: C:\> move opm.exe <directory> Verification After you install the opm CLI, verify that it is available: USD opm version 8.1.3. Additional resources See Managing custom catalogs for opm procedures including creating, updating, and pruning catalogs. 8.2. opm CLI reference The opm command-line interface (CLI) is a tool for creating and maintaining Operator catalogs. opm CLI syntax USD opm <command> [<subcommand>] [<argument>] [<flags>] Warning The opm CLI is not forward compatible. The version of the opm CLI used to generate catalog content must be earlier than or equal to the version used to serve the content on a cluster. Table 8.1. Global flags Flag Description -skip-tls-verify Skip TLS certificate verification for container image registries while pulling bundles or indexes. --use-http When you pull bundles, use plain HTTP for container image registries. Important The SQLite-based catalog format, including the related CLI commands, is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 8.2.1. generate Generate various artifacts for declarative config indexes. Command syntax USD opm generate <subcommand> [<flags>] Table 8.2. generate subcommands Subcommand Description dockerfile Generate a Dockerfile for a declarative config index. Table 8.3. generate flags Flags Description -h , --help Help for generate. 8.2.1.1. 
dockerfile Generate a Dockerfile for a declarative config index. Important This command creates a Dockerfile in the same directory as the <dcRootDir> (named <dcDirName>.Dockerfile ) that is used to build the index. If a Dockerfile with the same name already exists, this command fails. When specifying extra labels, if duplicate keys exist, only the last value of each duplicate key gets added to the generated Dockerfile. Command syntax USD opm generate dockerfile <dcRootDir> [<flags>] Table 8.4. generate dockerfile flags Flag Description -i, --binary-image (string) Image in which to build catalog. The default value is quay.io/operator-framework/opm:latest . -l , --extra-labels (string) Extra labels to include in the generated Dockerfile. Labels have the form key=value . -h , --help Help for Dockerfile. Note To build with the official Red Hat image, use the registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.17 value with the -i flag. 8.2.2. index Generate Operator index for SQLite database format container images from pre-existing Operator bundles. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see "Additional resources". Command syntax USD opm index <subcommand> [<flags>] Table 8.5. index subcommands Subcommand Description add Add Operator bundles to an index. prune Prune an index of all but specified packages. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. rm Delete an entire Operator from an index. 8.2.2.1. add Add Operator bundles to an index. Command syntax USD opm index add [<flags>] Table 8.6. index add flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -b , --bundles (strings) Comma-separated list of bundles to add. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to add to. --generate If enabled, only creates the Dockerfile and saves it to local disk. --mode (string) Graph update mode that defines how channel graphs are updated: replaces (the default value), semver , or semver-skippatch . -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. 8.2.2.2. prune Prune an index of all but specified packages. Command syntax USD opm index prune [<flags>] Table 8.7. 
index prune flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. --generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 8.2.2.3. prune-stranded Prune an index of stranded bundles, which are bundles that are not associated with a particular image. Command syntax USD opm index prune-stranded [<flags>] Table 8.8. index prune-stranded flags Flag Description -i , --binary-image Container image for on-image opm command -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) Index to prune. --generate If enabled, only creates the Dockerfile and saves it to local disk. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -t , --tag (string) Custom tag for container image being built. 8.2.2.4. rm Delete an entire Operator from an index. Command syntax USD opm index rm [<flags>] Table 8.9. index rm flags Flag Description -i , --binary-image Container image for on-image opm command -u , --build-tool (string) Tool to build container images: podman (the default value) or docker . Overrides part of the --container-tool flag. -c , --container-tool (string) Tool to interact with container images, such as for saving and building: docker or podman . -f , --from-index (string) index to delete from. --generate If enabled, only creates the Dockerfile and saves it to local disk. -o , --operators (strings) Comma-separated list of Operators to delete. -d , --out-dockerfile (string) Optional: If generating the Dockerfile, specify a file name. -p , --packages (strings) Comma-separated list of packages to keep. --permissive Allow registry load errors. -p , --pull-tool (string) Tool to pull container images: none (the default value), docker , or podman . Overrides part of the --container-tool flag. -t , --tag (string) Custom tag for container image being built. Additional resources Operator Framework packaging format Managing custom catalogs Mirroring images for a disconnected installation using the oc-mirror plugin 8.2.3. init Generate an olm.package declarative config blob. Command syntax USD opm init <package_name> [<flags>] Table 8.10. init flags Flag Description -c , --default-channel (string) The channel that subscriptions will default to if unspecified. -d , --description (string) Path to the Operator's README.md or other documentation. -i , --icon (string) Path to package's icon. -o , --output (string) Output format: json (the default value) or yaml . 8.2.4. migrate Migrate a SQLite database format index image or database file to a file-based catalog. Important The SQLite-based catalog format, including the related CLI commands, is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Command syntax USD opm migrate <index_ref> <output_dir> [<flags>] Table 8.11. migrate flags Flag Description -o , --output (string) Output format: json (the default value) or yaml . 8.2.5. render Generate a declarative config blob from the provided index images, bundle images, and SQLite database files. Command syntax USD opm render <index_image | bundle_image | sqlite_file> [<flags>] Table 8.12. render flags Flag Description -o , --output (string) Output format: json (the default value) or yaml . 8.2.6. serve Serve declarative configs via a GRPC server. Note The declarative config directory is loaded by the serve command at startup. Changes made to the declarative config after this command starts are not reflected in the served content. Command syntax USD opm serve <source_path> [<flags>] Table 8.13. serve flags Flag Description --cache-dir (string) If this flag is set, it syncs and persists the server cache directory. --cache-enforce-integrity Exits with an error if the cache is not present or is invalidated. The default value is true when the --cache-dir flag is set and the --cache-only flag is false . Otherwise, the default is false . --cache-only Syncs the serve cache and exits without serving. --debug Enables debug logging. h , --help Help for serve. -p , --port (string) The port number for the service. The default value is 50051 . --pprof-addr (string) The address of the startup profiling endpoint. The format is Addr:Port . -t , --termination-log (string) The path to a container termination log file. The default value is /dev/termination-log . 8.2.7. validate Validate the declarative config JSON file(s) in a given directory. Command syntax USD opm validate <directory> [<flags>]
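To tie the subcommands in this chapter together, the following is a minimal sketch of working with a file-based catalog locally. The index image reference and the my-catalog directory name are illustrative, and the sketch assumes you can pull the referenced image; every flag used here is documented in the tables above.

# Render an existing catalog image into a local file-based catalog directory.
mkdir -p my-catalog
opm render registry.redhat.io/redhat/redhat-operator-index:v4.17 --output=yaml > my-catalog/catalog.yaml

# Validate the declarative config, then generate a Dockerfile for building a catalog image.
opm validate my-catalog
opm generate dockerfile my-catalog

# Serve the declarative config over gRPC (default port 50051) for local testing.
opm serve my-catalog -p 50051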
[ "tar xvf <file>", "echo USDPATH", "sudo mv ./opm /usr/local/bin/", "C:\\> path", "C:\\> move opm.exe <directory>", "opm version", "opm <command> [<subcommand>] [<argument>] [<flags>]", "opm generate <subcommand> [<flags>]", "opm generate dockerfile <dcRootDir> [<flags>]", "opm index <subcommand> [<flags>]", "opm index add [<flags>]", "opm index prune [<flags>]", "opm index prune-stranded [<flags>]", "opm index rm [<flags>]", "opm init <package_name> [<flags>]", "opm migrate <index_ref> <output_dir> [<flags>]", "opm render <index_image | bundle_image | sqlite_file> [<flags>]", "opm serve <source_path> [<flags>]", "opm validate <directory> [<flags>]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/opm-cli
Chapter 1. Installing Ansible plug-ins for Red Hat Developer Hub
Chapter 1. Installing Ansible plug-ins for Red Hat Developer Hub Ansible plug-ins for Red Hat Developer Hub deliver an Ansible-specific portal experience with curated learning paths, push-button content creation, integrated development tools, and other opinionated resources. Important The Ansible plug-ins are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . Additional details on how Red Hat provides support for bundled community dynamic plugins are available on the Red Hat Developer Support Policy page. To install and configure the Ansible plug-ins, see Installing Ansible plug-ins for Red Hat Developer Hub .
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/configuring_dynamic_plugins/installing-ansible-plug-ins-for-red-hat-developer-hub
Chapter 11. Intercepting Messages
Chapter 11. Intercepting Messages With AMQ Broker you can intercept packets entering or exiting the broker, allowing you to audit packets or filter messages. Interceptors can change the packets they intercept, which makes them powerful, but also potentially dangerous. You can develop interceptors to meet your business requirements. Interceptors are protocol specific and must implement the appropriate interface. Interceptors must implement the intercept() method, which returns a boolean value. If the value is true , the message packet continues onward. If false , the process is aborted, no other interceptors are called, and the message packet is not processed further. 11.1. Creating Interceptors You can create your own incoming and outgoing interceptors. All interceptors are protocol specific and are called for any packet entering or exiting the server respectively. This allows you to create interceptors to meet business requirements such as auditing packets. Interceptors can change the packets they intercept. This makes them powerful as well as potentially dangerous, so be sure to use them with caution. Interceptors and their dependencies must be placed in the Java classpath of the broker. You can use the <broker_instance_dir> /lib directory since it is part of the classpath by default. Procedure The following examples demonstrate how to create an interceptor that checks the size of each packet passed to it. Note that the examples implement a specific interface for each protocol. Implement the appropriate interface and override its intercept() method. If you are using the AMQP protocol, implement the org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor interface. package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This AMQPMessage has an acceptable size."); return true; } return false; } } If you are using Core Protocol, your interceptor must implement the org.apache.artemis.activemq.api.core.Interceptor interface. package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This Packet has an acceptable size."); return true; } return false; } } If you are using the MQTT protocol, implement the org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor interface. 
package com.example; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println("This MqttMessage has an acceptable size."); return true; } return false; } } If you are using the STOMP protocol, implement the org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor interface. package com.example; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This StompFrame has an acceptable size."); return true; } return false; } } 11.2. Configuring the Broker to Use Interceptors Once you have created an interceptor, you must configure the broker to use it. Prerequisites You must create an interceptor class and add it (and its dependencies) to the Java classpath of the broker before you can configure it for use by the broker. You can use the <broker_instance_dir> /lib directory since it is part of the classpath by default. Procedure Configure the broker to use an interceptor by adding configuration to <broker_instance_dir> /etc/broker.xml If your interceptor is intended for incoming messages, add its class-name to the list of remoting-incoming-interceptors . <configuration> <core> ... <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> ... </core> </configuration> If your interceptor is intended for outgoing messages, add its class-name to the list of remoting-outgoing-interceptors . <configuration> <core> ... <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration> Additional resources To learn how to configure interceptors in the AMQ Core Protocol JMS client, see Using message interceptors in the AMQ Core Protocol JMS documentation.
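The procedures above assume that the compiled interceptor is already on the broker classpath. The following is a minimal sketch of compiling and deploying the AMQP example; the jar name, the classes output directory, and the use of the installation's lib directory for the compile classpath are assumptions about a typical layout, not requirements.

# Compile the interceptor against the Artemis libraries shipped with the broker installation.
mkdir -p classes
javac -cp "<install_dir>/lib/*" -d classes com/example/MyInterceptor.java

# Package the class and place the jar on the broker instance classpath.
jar cf my-interceptor.jar -C classes .
cp my-interceptor.jar <broker_instance_dir>/lib/

# Restart the broker so that the interceptor configured in broker.xml is loaded.
<broker_instance_dir>/bin/artemis stop
<broker_instance_dir>/bin/artemis run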
[ "package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This AMQPMessage has an acceptable size.\"); return true; } return false; } }", "package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This Packet has an acceptable size.\"); return true; } return false; } }", "package com.example; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This MqttMessage has an acceptable size.\"); return true; } return false; } }", "package com.example; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This StompFrame has an acceptable size.\"); return true; } return false; } }", "<configuration> <core> <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> </core> </configuration>", "<configuration> <core> <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/interceptors
Part I. System Logins
Part I. System Logins This part provides instruction on how to configure system authentication with the use of the authconfig , ipa-client-install , and realmd tools.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/system-logins
Chapter 3. Installing the Red Hat JBoss Web Server 6.0
Chapter 3. Installing the Red Hat JBoss Web Server 6.0 You can install the JBoss Web Server 6.0 on Red Hat Enterprise Linux or Microsoft Windows. For more information see the following sections of the installation guide: Installing JBoss Web Server on Red Hat Enterprise Linux from archive files Installing JBoss Web Server on Red Hat Enterprise Linux from RPM packages Installing JBoss Web Server on Microsoft Windows
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_4_release_notes/installing_the_red_hat_jboss_web_server_6_0
Chapter 2. Deploy using dynamic storage devices
Chapter 2. Deploy using dynamic storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by VMware vSphere (disk format: thin) provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Note Both internal and external OpenShift Data Foundation clusters are supported on VMware vSphere. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. 
Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling key rotation when using KMS Security common practices require periodic encryption key rotation. You can enable key rotation when using KMS using this procedure. To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to either Namespace , StorageClass , or PersistentVolumeClaims (in order of precedence). <value> can be either @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . For VMs on VMware, ensure the disk.EnableUUID option is set to TRUE . 
You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab . For more information, see Installing on vSphere . Optional: If you want to use thick-provisioned storage for flexibility, you must create a storage class with zeroedthick or eagerzeroedthick disk format. For information, see VMware vSphere object definition . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to thin . If you have created a storage class with zeroedthick or eagerzeroedthick disk format for thick-provisioned storage, then that storage class is listed in addition to the default, thin storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Spread the worker nodes across three different physical nodes, racks, or failure domains for high availability. 
Use vCenter anti-affinity to align OpenShift Data Foundation rack labels with physical nodes and racks in the data center to avoid scheduling two worker nodes on the same physical chassis. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of the aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Select the Taint nodes checkbox to make selected nodes dedicated for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . 
Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide.
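Note: As a supplement to the web console verification steps above, the following CLI checks are an illustrative sketch; they assume the default openshift-storage namespace, and the StorageCluster resource name in your cluster may differ.
# Check that the StorageCluster reports a Ready phase
oc get storagecluster -n openshift-storage
# Confirm that the OpenShift Data Foundation pods are Running or Completed
oc get pods -n openshift-storage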
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "oc get namespace default NAME STATUS AGE default Active 5d2h", "oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated", "oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h", "oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated", "oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" 
--overwrite=true persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s" ]
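Note: The procedure above mentions that thick-provisioned storage requires a storage class created with the zeroedthick or eagerzeroedthick disk format. The authoritative object definition is in the linked VMware vSphere object definition reference; the following is only a hypothetical sketch that assumes the in-tree vSphere provisioner, with the storage class name chosen as a placeholder.
cat <<EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thick   # placeholder name for the thick-provisioned storage class
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: eagerzeroedthick   # or zeroedthick
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF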
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_on_vmware_vsphere/deploy-using-dynamic-storage-devices-vmware
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_red_hat_virtualization_platform/providing-feedback-on-red-hat-documentation_rhodf
Chapter 7. Deleting virtual machines
Chapter 7. Deleting virtual machines To delete virtual machines in RHEL 9, use the command line or the web console GUI . 7.1. Deleting virtual machines by using the command line To delete a virtual machine (VM), you can remove its XML configuration and associated storage files from the host by using the command line. Follow the procedure below: Prerequisites Back up important data from the VM. Shut down the VM. Make sure no other VMs use the same associated storage. Procedure Use the virsh undefine utility. For example, the following command removes the guest1 VM, its associated storage volumes, and non-volatile RAM, if any. Additional resources virsh undefine --help command virsh man page on your system 7.2. Deleting virtual machines by using the web console To delete a virtual machine (VM) and its associated storage files from the host to which the RHEL 9 web console is connected, follow the procedure below: Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Back up important data from the VM. Make sure no other VM uses the same associated storage. Optional: Shut down the VM. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the Virtual Machines interface, click the Menu button ... of the VM that you want to delete. A drop-down menu appears with controls for various VM operations. Click Delete . A confirmation dialog appears. Optional: To delete all or some of the storage files associated with the VM, select the checkboxes next to the storage files you want to delete. Click Delete . The VM and any selected storage files are deleted.
[ "virsh undefine guest1 --remove-all-storage --nvram Domain 'guest1' has been undefined Volume 'vda'(/home/images/guest1.qcow2) removed." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_deleting-virtual-machines_configuring-and-managing-virtualization
Chapter 1. Machine APIs
Chapter 1. Machine APIs 1.1. ContainerRuntimeConfig [machineconfiguration.openshift.io/v1] Description ContainerRuntimeConfig describes a customized Container Runtime configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ControllerConfig [machineconfiguration.openshift.io/v1] Description ControllerConfig describes configuration for MachineConfigController. This is currently only used to drive the MachineConfig objects generated by the TemplateController. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ControlPlaneMachineSet [machine.openshift.io/v1] Description ControlPlaneMachineSet ensures that a specified number of control plane machine replicas are running at any given time. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. KubeletConfig [machineconfiguration.openshift.io/v1] Description KubeletConfig describes a customized Kubelet configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. MachineConfig [machineconfiguration.openshift.io/v1] Description MachineConfig defines the configuration for a machine Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. MachineConfigPool [machineconfiguration.openshift.io/v1] Description MachineConfigPool describes a pool of MachineConfigs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. MachineHealthCheck [machine.openshift.io/v1beta1] Description MachineHealthCheck is the Schema for the machinehealthchecks API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.8. Machine [machine.openshift.io/v1beta1] Description Machine is the Schema for the machines API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.9. MachineSet [machine.openshift.io/v1beta1] Description MachineSet ensures that a specified number of machines replicas are running at any given time. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object
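Note: These resources can be inspected directly with the OpenShift CLI. The commands below are a minimal sketch and assume cluster-admin access and the standard openshift-machine-api namespace.
# List MachineSets and Machines managed by the Machine API
oc get machinesets,machines -n openshift-machine-api
# Display the schema of a resource, for example MachineConfigPool
oc explain machineconfigpool.spec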
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_apis/machine-apis
Chapter 1. New features and enhancements
Chapter 1. New features and enhancements Red Hat JBoss Core Services (JBCS) 2.4.57 Service Pack 5 includes the following new features and enhancements. 1.1. JBCS support for UnsafeAllow3F flag for URL rewrites JBCS 2.4.57 Service Pack 5 introduces support for the UnsafeAllow3F flag, which you can specify as part of the RewriteRule directive of the mod_rewrite module. You must set the UnsafeAllow3F flag if you want to allow a URL rewrite to continue when the HTTP request has an encoded question mark, %3f , and the rewritten result has a ? character in the substitution. This flag protects the HTTP request from a malicious URL that could take advantage of a capture and re-substitution of the encoded question mark. For more information, see RewriteRule Flags: UnsafeAllow3F . 1.2. JBCS support for UnsafePrefixStat flag for URL rewrites JBCS 2.4.57 Service Pack 5 introduces support for the UnsafePrefixStat flag, which you can specify as part of the RewriteRule directive of the mod_rewrite module. You must set the UnsafePrefixStat flag in server-scoped substitutions that start with a variable or back-reference and resolve to a file-system path. These substitutions are not prefixed with the document root. This flag protects the HTTP request from a malicious URL that could cause the expanded substitution to map to an unexpected file-system location. For more information, see RewriteRule Flags: UnsafePrefixStat .
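Note: The release notes do not include configuration examples for these flags. The snippet below is a hypothetical illustration only; the target file path, the SITE_ROOT variable, and the rule patterns are placeholders rather than values from this release.
# Example only: append illustrative rewrite rules to an httpd configuration file
cat >> "$HTTPD_CONF_DIR/example-rewrite.conf" <<'EOF'
RewriteEngine on
# UnsafeAllow3F: let the rewrite continue when an encoded question mark (%3f) in the request becomes a ? in the substitution
RewriteRule "^/old/(.*)$" "/new/index.html?path=$1" [UnsafeAllow3F]
# UnsafePrefixStat: permit a server-scoped substitution that starts with a variable and resolves to a file-system path
RewriteRule "^/site/(.*)$" "%{ENV:SITE_ROOT}/$1" [UnsafePrefixStat]
EOF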
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_5_release_notes/new_features_and_enhancements
Chapter 17. Database Servers
Chapter 17. Database Servers This chapter guides you through the installation and configuration of the MariaDB server, which is a fast and robust open source database server that is based on MySQL technology. The chapter also describes how to back up MariaDB data. 17.1. MariaDB MariaDB is a relational database which converts data into structured information and provides an SQL interface for accessing data. It includes multiple storage engines and plug-ins, as well as geographic information system (GIS). Red Hat Enterprise Linux 7 contains MariaDB 5.5 as the default implementation of a server from the MySQL databases family. Later versions of the MariaDB database server are available as Software Collections for Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. For more information about the latest versions, see the Release Notes for Red Hat Software Collections . 17.1.1. Installing the MariaDB server To install the MariaDB server, follow this procedure: Installing the MariaDB server Ensure that the mariadb and mariadb-server packages are installed on the required server: Start the mariadb service: Enable the mariadb service to start at boot: 17.1.1.1. Improving MariaDB installation security You can improve security when installing the MariaDB server by running the mysql_secure_installation command: This command launches a fully interactive script, which prompts for each step in the process. The script enables you to improve security in the following ways: Setting a password for root accounts Removing anonymous users Disallowing remote (outside the local host) root logins Removing the test database 17.1.2. Configuring the MariaDB server for networking To configure the MariaDB server for networking, use the [mysqld] section of the /etc/my.cnf.d/server.cnf file, where you can set the following configuration directives: bind-address Bind-address is the address on which the server will listen. Possible options are: a host name, an IPv4 address, or an IPv6 address. skip-networking Possible values are: 0 - to listen for all clients 1 - to listen for local clients only port The port on which MariaDB listens for TCP/IP connections. 17.1.3. Backing up MariaDB data There are two main ways to back up data from a MariaDB database: Logical backup Physical backup 17.1.3.1. Logical backup Logical backup consists of the SQL statements necessary to restore the data. This type of backup exports information and records in plain text files. The main advantage of logical backup over physical backup is portability and flexibility. The data can be restored on other hardware configurations, MariaDB versions, or Database Management Systems (DBMS), which is not possible with physical backups. Warning Logical backup can be performed only if the mariadb.service is running. Logical backup does not include log and configuration files. 17.1.3.2. Physical backup Physical backup consists of copies of files and directories that store the content. Physical backup has the following advantages compared to logical backup: Output is more compact. Backup is smaller in size. Backup and restore are faster. Backup includes log and configuration files. Warning Physical backup must be performed when the mariadb.service is not running or all tables in the database are locked to prevent changes during the backup.
[ "~]# yum install mariadb mariadb-server", "~]# systemctl start mariadb.service", "~]# systemctl enable mariadb.service", "~]# mysql_secure_installation" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-database_servers
Chapter 1. Connecting RHEL systems directly to AD using SSSD
Chapter 1. Connecting RHEL systems directly to AD using SSSD To connect a RHEL system to Active Directory (AD), use: System Security Services Daemon (SSSD) for identity and authentication realmd to detect available domains and configure the underlying RHEL system services. 1.1. Overview of direct integration using SSSD You use SSSD to access a user directory for authentication and authorization through a common framework with user caching to permit offline logins. SSSD is highly configurable; it provides Pluggable Authentication Modules (PAM) and Name Switch Service (NSS) integration and a database to store local users as well as extended user data retrieved from a central server. SSSD is the recommended component to connect a RHEL system with one of the following types of identity server: Active Directory Identity Management (IdM) in RHEL Any generic LDAP or Kerberos server Note Direct integration with SSSD works only within a single AD forest by default. The most convenient way to configure SSSD to directly integrate a Linux system with AD is to use the realmd service. It allows callers to configure network authentication and domain membership in a standard way. The realmd service automatically discovers information about accessible domains and realms and does not require advanced configuration to join a domain or realm. You can use SSSD for both direct and indirect integration with AD and it allows you to switch from one integration approach to another. Direct integration is a simple way to introduce RHEL systems to an AD environment. However, as the share of RHEL systems grows, your deployments usually need a better centralized management of the identity-related policies such as host-based access control, sudo, or SELinux user mappings. Initially, you can maintain the configuration of these aspects of the RHEL systems in local configuration files. However, with a growing number of systems, distribution and management of the configuration files is easier with a provisioning system such as Red Hat Satellite. When direct integration does not scale anymore, you should consider indirect integration. For more information about moving from direct integration (RHEL clients are in the AD domain) to indirect integration (IdM with trust to AD), see Moving RHEL clients from AD domain to IdM Server. Additional resources realm(8) , sssd-ad(5) , and sssd(8) man pages on your system Deciding between indirect and direct integration 1.2. Supported Windows platforms for direct integration You can directly integrate your RHEL system with Active Directory forests that use the following forest and domain functional levels: Forest functional level range: Windows Server 2008 - Windows Server 2016 Domain functional level range: Windows Server 2008 - Windows Server 2016 Direct integration has been tested on the following supported operating systems: Windows Server 2022 (RHEL 8.7 or later) Windows Server 2019 Windows Server 2016 Windows Server 2012 R2 Note Windows Server 2019 and Windows Server 2022 do not introduce a new functional level. The highest functional level Windows Server 2019 and Windows Server 2022 use is Windows Server 2016. 1.3. Connecting directly to AD The System Security Services Daemon (SSSD) is the recommended component to connect a Red Hat Enterprise Linux (RHEL) system with Active Directory (AD). You can integrate directly with AD by using either POSIX ID mapping, which is the default for SSSD, or by using POSIX attributes defined in AD. 
Important Before joining your system to AD, ensure you configured your system correctly by following the procedure in the Red Hat Knowledgebase solution Basic Prechecks Steps: RHEL Join With Active Directory using 'adcli', 'realm' and 'net' commands . 1.3.1. Options for integrating with AD: using POSIX ID mapping or POSIX attributes Linux and Windows systems use different identifiers for users and groups: Linux uses user IDs (UID) and group IDs (GID). See Introduction to managing user and group accounts in Configuring Basic System Settings . Linux UIDs and GIDs are compliant with the POSIX standard. Windows use security IDs (SID). Important After connecting a RHEL system to AD, you can authenticate with your AD username and password. Do not create a Linux user with the same name as a Windows user, as duplicate names might cause a conflict and interrupt the authentication process. To authenticate to a RHEL system as an AD user, you must have a UID and GID assigned. SSSD provides the option to integrate with AD either using POSIX ID mapping or POSIX attributes in AD. The default is to use POSIX ID mapping. 1.3.2. Connecting to AD using POSIX ID mapping SSSD uses the SID of an AD user to algorithmically generate POSIX IDs in a process called POSIX ID mapping. POSIX ID mapping creates an association between SIDs in AD and IDs on Linux. When SSSD detects a new AD domain, it assigns a range of available IDs to the new domain. When an AD user logs in to an SSSD client machine for the first time, SSSD creates an entry for the user in the SSSD cache, including a UID based on the user's SID and the ID range for that domain. Because the IDs for an AD user are generated in a consistent way from the same SID, the user has the same UID and GID when logging in to any RHEL system. Note When all client systems use SSSD to map SIDs to Linux IDs, the mapping is consistent. If some clients use different software, choose one of the following: Ensure that the same mapping algorithm is used on all clients. Use explicit POSIX attributes defined in AD. For more information, see the section on ID mapping in the sssd-ad man page. 1.3.2.1. Discovering and joining an AD Domain using SSSD Follow this procedure to discover an AD domain and connect a RHEL system to that domain using SSSD. Prerequisites Ensure that the required ports are open: Ports required for direct integration of RHEL systems into AD using SSSD Ensure that you are using the AD domain controller server for DNS. Verify that the system time on both systems is synchronized. This ensures that Kerberos is able to work correctly. Procedure Install the following packages: To display information for a specific domain, run realm discover and add the name of the domain you want to discover: The realmd system uses DNS SRV lookups to find the domain controllers in this domain automatically. Note The realmd system can discover both Active Directory and Identity Management domains. If both domains exist in your environment, you can limit the discovery results to a specific type of server using the --server-software=active-directory option. Configure the local RHEL system with the realm join command. The realmd suite edits all required configuration files automatically. For example, for a domain named ad.example.com : Verification Display an AD user details, such as the administrator user: Additional resources realm(8) and nmcli(1) man pages on your system 1.3.3. 
Connecting to AD using POSIX attributes defined in Active Directory AD can create and store POSIX attributes, such as uidNumber , gidNumber , unixHomeDirectory , or loginShell . When using POSIX ID mapping, SSSD creates new UIDs and GIDs, which overrides the values defined in AD. To keep the AD-defined values, you must disable POSIX ID mapping in SSSD. For best performance, publish the POSIX attributes to the AD global catalog. If POSIX attributes are not present in the global catalog, SSSD connects to the individual domain controllers directly on the LDAP port. Prerequisites Ensure that the required ports are open: Ports required for direct integration of RHEL systems into AD using SSSD Ensure that you are using the AD domain controller server for DNS. Verify that the system time on both systems is synchronized. This ensures that Kerberos is able to work correctly. Procedure Install the following packages: Configure the local RHEL system with POSIX ID mapping disabled using the realm join command with the --automatic-id-mapping=no option. The realmd suite edits all required configuration files automatically. For example, for a domain named ad.example.com : If you already joined a domain, you can manually disable POSIX ID Mapping in SSSD: Open the /etc/sssd/sssd.conf file. In the AD domain section, add the ldap_id_mapping = false setting. Remove the SSSD caches: Restart SSSD: SSSD now uses POSIX attributes from AD, instead of creating them locally. Note You must have the relevant POSIX attributes ( uidNumber , gidNumber , unixHomeDirectory , and loginShell ) configured for the users in AD. Verification Display an AD user details, such as the administrator user: Additional resources sssd-ldap(8) man page on your system 1.3.4. Connecting to multiple domains in different AD forests with SSSD You can use an Active Directory (AD) Managed Service Account (MSA) to access AD domains from different forests where there is no trust between them. See Accessing AD with a Managed Service Account . 1.4. How the AD provider handles dynamic DNS updates Active Directory (AD) actively maintains its DNS records by timing out ( aging ) and removing ( scavenging ) inactive records. By default, the SSSD service refreshes a RHEL client's DNS record at the following intervals: Every time the identity provider comes online. Every time the RHEL system reboots. At the interval specified by the dyndns_refresh_interval option in the /etc/sssd/sssd.conf configuration file. The default value is 86400 seconds (24 hours). Note If you set the dyndns_refresh_interval option to the same interval as the DHCP lease, you can update the DNS record after the IP lease is renewed. SSSD sends dynamic DNS updates to the AD server using Kerberos/GSSAPI for DNS (GSS-TSIG). This means that you only need to enable secure connections to AD. Additional resources sssd-ad(5) man page on your system 1.5. Modifying dynamic DNS settings for the AD provider The System Security Services Daemon (SSSD) service refreshes the DNS record of a Red Hat Enterprise Linux (RHEL) client joined to an AD environment at default intervals. The following procedure adjusts these intervals. Prerequisites You have joined a RHEL host to an Active Directory environment with the SSSD service. You need root permissions to edit the /etc/sssd/sssd.conf configuration file. Procedure Open the /etc/sssd/sssd.conf configuration file in a text editor. 
Add the following options to the [domain] section for your AD domain to set the DNS record refresh interval to 12 hours, disable updating PTR records, and set the DNS record Time To Live (TTL) to 1 hour. Save and close the /etc/sssd/sssd.conf configuration file. Restart the SSSD service to load the configuration changes. Note You can disable dynamic DNS updates by setting the dyndns_update option in the sssd.conf file to false : Additional resources How the AD provider handles dynamic DNS updates sssd-ad(5) man page on your system 1.6. How the AD provider handles trusted domains If you set the id_provider = ad option in the /etc/sssd/sssd.conf configuration file, SSSD handles trusted domains as follows: SSSD only supports domains in a single AD forest. If SSSD requires access to multiple domains from multiple forests, consider using IPA with trusts (preferred) or the winbindd service instead of SSSD. By default, SSSD discovers all domains in the forest and, if a request for an object in a trusted domain arrives, SSSD tries to resolve it. If the trusted domains are not reachable or geographically distant, which makes them slow, you can set the ad_enabled_domains parameter in /etc/sssd/sssd.conf to limit from which trusted domains SSSD resolves objects. By default, you must use fully-qualified user names to resolve users from trusted domains. Additional resources sssd.conf(5) man page on your system 1.7. Overriding Active Directory site autodiscovery with SSSD Active Directory (AD) forests can be very large, with numerous different domain controllers, domains, child domains and physical sites. AD uses the concept of sites to identify the physical location for its domain controllers. This enables clients to connect to the domain controller that is geographically closest, which increases client performance. This section describes how SSSD uses autodiscovery to find an AD site to connect to, and how you can override autodiscovery and specify a site manually. 1.7.1. How SSSD handles AD site autodiscovery By default, SSSD clients use autodiscovery to find its AD site and connect to the closest domain controller. The process consists of these steps: SSSD performs an SRV query to find Domain Controllers (DCs) in the domain. SSSD reads the discovery domain from the dns_discovery_domain or the ad_domain options in the SSSD configuration file. SSSD performs Connection-Less LDAP (CLDAP) pings to these DCs in 3 batches to avoid pinging too many DCs and avoid timeouts from unreachable DCs. If SSSD receives site and forest information during any of these batches, it skips the rest of the batches. SSSD creates and saves a list of site-specific and backup servers. 1.7.2. Overriding AD site autodiscovery To override the autodiscovery process, specify the AD site to which you want the client to connect by adding the ad_site option to the [domain] section of the /etc/sssd/sssd.conf file. This example configures the client to connect to the ExampleSite AD site. Prerequisites You have joined a RHEL host to an Active Directory environment using the SSSD service. You can authenticate as the root user so you can edit the /etc/sssd/sssd.conf configuration file. Procedure Open the /etc/sssd/sssd.conf file in a text editor. Add the ad_site option to the [domain] section for your AD domain: Save and close the /etc/sssd/sssd.conf configuration file. Restart the SSSD service to load the configuration changes: 1.8. realm commands The realmd system has two major task areas: Managing system enrollment in a domain. 
Controlling which domain users are allowed to access local system resources. In realmd use the command line tool realm to run commands. Most realm commands require the user to specify the action that the utility should perform, and the entity, such as a domain or user account, for which to perform the action. Table 1.1. realmd commands Command Description Realm Commands discover Run a discovery scan for domains on the network. join Add the system to the specified domain. leave Remove the system from the specified domain. list List all configured domains for the system or all discovered and configured domains. Login Commands permit Enable access for specific users or for all users within a configured domain to access the local system. deny Restrict access for specific users or for all users within a configured domain to access the local system. Additional resources realm(8) man page on your system 1.9. Ports required for direct integration of RHEL systems into AD using SSSD The following ports must be open and accessible to the AD domain controllers and the RHEL host. Table 1.2. Ports Required for Direct Integration of Linux Systems into AD Using SSSD Service Port Protocol Notes DNS 53 UDP and TCP LDAP 389 UDP and TCP LDAPS 636 TCP Optional Samba 445 UDP and TCP For AD Group Policy Objects (GPOs) Kerberos 88 UDP and TCP Kerberos 464 UDP and TCP Used by kadmin for setting and changing a password LDAP Global Catalog 3268 TCP If the id_provider = ad option is being used LDAPS Global Catalog 3269 TCP Optional NTP 123 UDP Optional NTP 323 UDP Optional
[ "yum install samba-common-tools realmd oddjob oddjob-mkhomedir sssd adcli krb5-workstation", "realm discover ad.example.com ad.example.com type: kerberos realm-name: AD.EXAMPLE.COM domain-name: ad.example.com configured: no server-software: active-directory client-software: sssd required-package: oddjob required-package: oddjob-mkhomedir required-package: sssd required-package: adcli required-package: samba-common", "realm join ad.example.com", "getent passwd [email protected] [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash", "yum install realmd oddjob oddjob-mkhomedir sssd adcli krb5-workstation", "realm join --automatic-id-mapping=no ad.example.com", "rm -f /var/lib/sss/db/*", "systemctl restart sssd", "getent passwd [email protected] [email protected]:*:10000:10000:Administrator:/home/Administrator:/bin/bash", "[domain/ ad.example.com ] id_provider = ad dyndns_refresh_interval = 43200 dyndns_update_ptr = false dyndns_ttl = 3600", "systemctl restart sssd", "[domain/ ad.example.com ] id_provider = ad dyndns_update = false", "[domain/ad.example.com] id_provider = ad ad_site = ExampleSite", "systemctl restart sssd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/integrating_rhel_systems_directly_with_windows_active_directory/connecting-rhel-systems-directly-to-ad-using-sssd_integrating-rhel-systems-directly-with-active-directory
Chapter 4. Kernel Module Management Operator
Chapter 4. Kernel Module Management Operator Learn about the Kernel Module Management (KMM) Operator and how you can use it to deploy out-of-tree kernel modules and device plugins on OpenShift Container Platform clusters. 4.1. About the Kernel Module Management Operator The Kernel Module Management (KMM) Operator manages, builds, signs, and deploys out-of-tree kernel modules and device plugins on OpenShift Container Platform clusters. KMM adds a new Module CRD which describes an out-of-tree kernel module and its associated device plugin. You can use Module resources to configure how to load the module, define ModuleLoader images for kernel versions, and include instructions for building and signing modules for specific kernel versions. KMM is designed to accommodate multiple kernel versions at once for any kernel module, allowing for seamless node upgrades and reduced application downtime. 4.2. Installing the Kernel Module Management Operator As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI or the web console. The KMM Operator is supported on OpenShift Container Platform 4.12 and later. Installing KMM on version 4.11 does not require specific additional steps. For details on installing KMM on version 4.10 and earlier, see the section "Installing the Kernel Module Management Operator on earlier versions of OpenShift Container Platform". 4.2.1. Installing the Kernel Module Management Operator using the web console As a cluster administrator, you can install the Kernel Module Management (KMM) Operator using the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Install the Kernel Module Management Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Select Kernel Module Management Operator from the list of available Operators, and then click Install . From the Installed Namespace list, select the openshift-kmm namespace. Click Install . Verification To verify that KMM Operator installed successfully: Navigate to the Operators Installed Operators page. Ensure that Kernel Module Management Operator is listed in the openshift-kmm project with a Status of InstallSucceeded . Note During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. Troubleshooting To troubleshoot issues with Operator installation: Navigate to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-kmm project. 4.2.2. Installing the Kernel Module Management Operator by using the CLI As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI. Prerequisites You have a running OpenShift Container Platform cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. 
Procedure Install KMM in the openshift-kmm namespace: Create the following Namespace CR and save the YAML file, for example, kmm-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-kmm Create the following OperatorGroup CR and save the YAML file, for example, kmm-op-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm Create the following Subscription CR and save the YAML file, for example, kmm-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0 Create the subscription object by running the following command: USD oc create -f kmm-sub.yaml Verification To verify that the Operator deployment is successful, run the following command: USD oc get -n openshift-kmm deployments.apps kmm-operator-controller Example output NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller 1/1 1 1 97s The Operator is available. 4.2.3. Installing the Kernel Module Management Operator on earlier versions of OpenShift Container Platform The KMM Operator is supported on OpenShift Container Platform 4.12 and later. For version 4.10 and earlier, you must create a new SecurityContextConstraint object and bind it to the Operator's ServiceAccount . As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI. Prerequisites You have a running OpenShift Container Platform cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. 
Procedure Install KMM in the openshift-kmm namespace: Create the following Namespace CR and save the YAML file, for example, kmm-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm Create the following SecurityContextConstraint object and save the YAML file, for example, kmm-security-constraint.yaml : allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret Bind the SecurityContextConstraint object to the Operator's ServiceAccount by running the following commands: USD oc apply -f kmm-security-constraint.yaml USD oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller -n openshift-kmm Create the following OperatorGroup CR and save the YAML file, for example, kmm-op-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm Create the following Subscription CR and save the YAML file, for example, kmm-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0 Create the subscription object by running the following command: USD oc create -f kmm-sub.yaml Verification To verify that the Operator deployment is successful, run the following command: USD oc get -n openshift-kmm deployments.apps kmm-operator-controller Example output NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller 1/1 1 1 97s The Operator is available. 4.3. Configuring the Kernel Module Management Operator In most cases, the default configuration for the Kernel Module Management (KMM) Operator does not need to be modified. However, you can modify the Operator settings to suit your environment using the following procedure. The Operator configuration is set in the kmm-operator-manager-config ConfigMap in the Operator namespace. Procedure To modify the settings, edit the ConfigMap data by entering the following command: USD oc edit configmap -n "USDnamespace" kmm-operator-manager-config Example output healthProbeBindAddress: :8081 job: gcDelay: 1h leaderElection: enabled: true resourceID: kmm.sigs.x-k8s.io webhook: disableHTTP2: true # CVE-2023-44487 port: 9443 metrics: enableAuthnAuthz: true disableHTTP2: true # CVE-2023-44487 bindAddress: 0.0.0.0:8443 secureServing: true worker: runAsUser: 0 seLinuxType: spc_t setFirmwareClassPath: /var/lib/firmware Table 4.1. Operator configuration parameters Parameter Description healthProbeBindAddress Defines the address on which the Operator monitors for kubelet health probes. The recommended value is :8081 . job.gcDelay Defines the duration that successful build pods should be preserved for before they are deleted. 
There is no recommended value for this setting. For information about the valid values for this setting, see ParseDuration . leaderElection.enabled Determines whether leader election is used to ensure that only one replica of the KMM Operator is running at any time. For more information, see Leases . The recommended value is true . leaderElection.resourceID Determines the name of the resource that leader election uses for holding the leader lock. The recommended value is kmm.sigs.x-k8s.io . webhook.disableHTTP2 If true , disables HTTP/2 for the webhook server, as a mitigation for cve-2023-44487 . The recommended value is true . webhook.port Defines the port on which the Operator monitors webhook requests. The recommended value is 9443 . metrics.enableAuthnAuthz Determines if metrics are authenticated using TokenReviews and authorized using SubjectAccessReviews with the kube-apiserver. For authentication and authorization, the controller needs a ClusterRole with the following rules: apiGroups: authentication.k8s.io, resources: tokenreviews, verbs: create apiGroups: authorization.k8s.io, resources: subjectaccessreviews, verbs: create To scrape metrics, for example, using Prometheus, the client needs a ClusterRole with the following rule: nonResourceURLs: "/metrics", verbs: get The recommended value is true . metrics.disableHTTP2 If true , disables HTTP/2 for the metrics server as a mitigation for CVE-2023-44487 . The recommended value is true . metrics.bindAddress Determines the bind address for the metrics server. If unspecified, the default is :8080 . To disable the metrics server, set to 0 . The recommended value is 0.0.0.0:8443 . metrics.secureServing Determines whether the metrics are served over HTTPS instead of HTTP. The recommended value is true . worker.runAsUser Determines the value of the runAsUser field of the worker container's security context. For more information, see SecurityContext . The recommended value is 9443 . worker.seLinuxType Determines the value of the seLinuxOptions.type field of the worker container's security context. For more information, see SecurityContext . The recommended value is spc_t . worker.setFirmwareClassPath Sets the kernel's firmware search path into the /sys/module/firmware_class/parameters/path file on the node. The recommended value is /var/lib/firmware if you need to set that value through the worker app. Otherwise, unset. After modifying the settings, restart the controller with the following command: USD oc delete pod -n "<namespace>" -l app.kubernetes.io/component=kmm Note The value of <namespace> depends on your original installation method. Additional resources For more information, see Installing the Kernel Module Management Operator . 4.3.1. Unloading the kernel module You must unload the kernel modules when moving to a newer version or if they introduce some undesirable side effect on the node. Procedure To unload a module loaded with KMM from nodes, delete the corresponding Module resource. KMM then creates worker pods, where required, to run modprobe -r and unload the kernel module from the nodes. Warning When unloading worker pods, KMM needs all the resources it uses when loading the kernel module. This includes the ServiceAccount referenced in the Module as well as any RBAC defined to allow privileged KMM worker Pods to run. It also includes any pull secret referenced in .spec.imageRepoSecret . 
To avoid situations where KMM is unable to unload the kernel module from nodes, make sure those resources are not deleted while the Module resource is still present in the cluster in any state, including Terminating . KMM includes a validating admission webhook that rejects the deletion of namespaces that contain at least one Module resource. 4.3.2. Setting the kernel firmware search path The Linux kernel accepts the firmware_class.path parameter as a search path for firmware, as explained in Firmware search paths . KMM worker pods can set this value on nodes by writing to sysfs before attempting to load kmods. Procedure To define a firmware search path, set worker.setFirmwareClassPath to /var/lib/firmware in the Operator configuration. Additional resources For more information about the worker.setFirmwareClassPath path, see Configuring the Kernel Module Management Operator . 4.4. Uninstalling the Kernel Module Management Operator Use one of the following procedures to uninstall the Kernel Module Management (KMM) Operator, depending on how the KMM Operator was installed. 4.4.1. Uninstalling a Red Hat catalog installation Use this procedure if KMM was installed from the Red Hat catalog. Procedure Use the following method to uninstall the KMM Operator: Use the OpenShift console under Operators --> Installed Operators to locate and uninstall the Operator. Note Alternatively, you can delete the Subscription resource in the KMM namespace. 4.4.2. Uninstalling a CLI installation Use this command if the KMM Operator was installed using the OpenShift CLI. Procedure Run the following command to uninstall the KMM Operator: USD oc delete -k https://github.com/rh-ecosystem-edge/kernel-module-management/config/default Note Using this command deletes the Module CRD and all Module instances in the cluster. 4.5. Kernel module deployment Kernel Module Management (KMM) monitors Node and Module resources in the cluster to determine if a kernel module should be loaded on or unloaded from a node. To be eligible for a module, a node must contain the following: Labels that match the module's .spec.selector field. A kernel version matching one of the items in the module's .spec.moduleLoader.container.kernelMappings field. If ordered upgrade ( ordered_upgrade.md ) is configured in the module, a label that matches its .spec.moduleLoader.container.version field. When KMM reconciles nodes with the desired state as configured in the Module resource, it creates worker pods on the target nodes to run the necessary action. The KMM Operator monitors the outcome of the pods and records the information. The Operator uses this information to label the Node objects when the module is successfully loaded, and to run the device plugin, if configured. Worker pods run the KMM worker binary that performs the following tasks: Pulls the kmod image configured in the Module resource. Kmod images are standard OCI images that contain .ko files. Extracts the image in the pod's filesystem. Runs modprobe with the specified arguments to perform the necessary action. 4.5.1. The Module custom resource definition The Module custom resource definition (CRD) represents a kernel module that can be loaded on all or select nodes in the cluster, through a kmod image. A Module custom resource (CR) specifies one or more kernel versions with which it is compatible, and a node selector. The compatible versions for a Module resource are listed under .spec.moduleLoader.container.kernelMappings . 
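For illustration, the following is a minimal sketch of a Module with two kernel mappings. The module name, image names, and kernel version are placeholders rather than values taken from this document, and the sketch omits optional sections such as build and sign.

apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: my-kmod
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: my_kmod
      kernelMappings:
        # Matches one kernel version exactly
        - literal: 6.0.15-300.fc37.x86_64
          containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64
        # Matches any other kernel version; the tag is templated per kernel
        - regexp: '^.+$'
          containerImage: "some.registry/org/my-kmod:${KERNEL_FULL_VERSION}"
  selector:
    node-role.kubernetes.io/worker: ""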
A kernel mapping can either match a literal version, or use regexp to match many of them at the same time. The reconciliation loop for the Module resource runs the following steps: List all nodes matching .spec.selector . Build a set of all kernel versions running on those nodes. For each kernel version: Go through .spec.moduleLoader.container.kernelMappings and find the appropriate container image name. If the kernel mapping has build or sign defined and the container image does not already exist, run the build, the signing pod, or both, as needed. Create a worker pod to pull the container image determined in the step and run modprobe . If .spec.devicePlugin is defined, create a device plugin daemon set using the configuration specified under .spec.devicePlugin.container . Run garbage-collect on: Obsolete device plugin DaemonSets that do not target any node. Successful build pods. Successful signing pods. 4.5.2. Set soft dependencies between kernel modules Some configurations require that several kernel modules be loaded in a specific order to work properly, even though the modules do not directly depend on each other through symbols. These are called soft dependencies. depmod is usually not aware of these dependencies, and they do not appear in the files it produces. For example, if mod_a has a soft dependency on mod_b , modprobe mod_a will not load mod_b . You can resolve these situations by declaring soft dependencies in the Module custom resource definition (CRD) using the modulesLoadingOrder field. # ... spec: moduleLoader: container: modprobe: moduleName: mod_a dirName: /opt firmwarePath: /firmware parameters: - param=1 modulesLoadingOrder: - mod_a - mod_b In the configuration above, the worker pod will first try to unload the in-tree mod_b before loading mod_a from the kmod image. When the worker pod is terminated and mod_a is unloaded, mod_b will not be loaded again. Note The first value in the list, to be loaded last, must be equivalent to the moduleName . 4.6. Security and permissions Important Loading kernel modules is a highly sensitive operation. After they are loaded, kernel modules have all possible permissions to do any kind of operation on the node. 4.6.1. ServiceAccounts and SecurityContextConstraints Kernel Module Management (KMM) creates a privileged workload to load the kernel modules on nodes. That workload needs ServiceAccounts allowed to use the privileged SecurityContextConstraint (SCC) resource. The authorization model for that workload depends on the namespace of the Module resource, as well as its spec. If the .spec.moduleLoader.serviceAccountName or .spec.devicePlugin.serviceAccountName fields are set, they are always used. If those fields are not set, then: If the Module resource is created in the Operator's namespace ( openshift-kmm by default), then KMM uses its default, powerful ServiceAccounts to run the worker and device plugin pods. If the Module resource is created in any other namespace, then KMM runs the pods with the namespace's default ServiceAccount . The Module resource cannot run a privileged workload unless you manually enable it to use the privileged SCC. Important openshift-kmm is a trusted namespace. When setting up RBAC permissions, remember that any user or ServiceAccount creating a Module resource in the openshift-kmm namespace results in KMM automatically running privileged workloads on potentially all nodes in the cluster. 
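When a Module is created outside the Operator namespace, a common pattern is to create a dedicated ServiceAccount and reference it from the Module spec, and then grant that ServiceAccount the privileged SCC with the command that follows. The sketch below assumes that pattern; the namespace, ServiceAccount, module, and image names are illustrative only.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kmm-module-loader
  namespace: my-kmod-namespace
---
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: my-kmod
  namespace: my-kmod-namespace
spec:
  moduleLoader:
    # Worker pods for this Module run with this ServiceAccount
    serviceAccountName: kmm-module-loader
    container:
      modprobe:
        moduleName: my_kmod
      kernelMappings:
        - regexp: '^.+$'
          containerImage: "some.registry/org/my-kmod:${KERNEL_FULL_VERSION}"
  selector:
    node-role.kubernetes.io/worker: ""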
To allow any ServiceAccount to use the privileged SCC and run worker or device plugin pods, you can use the oc adm policy command, as in the following example: USD oc adm policy add-scc-to-user privileged -z "USD{serviceAccountName}" [ -n "USD{namespace}" ] 4.6.2. Pod security standards OpenShift runs a synchronization mechanism that sets the namespace Pod Security level automatically based on the security contexts in use. No action is needed. Additional resources Understanding and managing pod security admission 4.7. Replacing in-tree modules with out-of-tree modules You can use Kernel Module Management (KMM) to build kernel modules that can be loaded into or unloaded from the kernel on demand. These modules extend the functionality of the kernel without the need to reboot the system. Modules can be configured as built-in or dynamically loaded. Dynamically loaded modules include in-tree modules and out-of-tree (OOT) modules. In-tree modules are internal to the Linux kernel tree, that is, they are already part of the kernel. Out-of-tree modules are external to the Linux kernel tree. They are generally written for development and testing purposes, such as testing the new version of a kernel module that is shipped in-tree, or to deal with incompatibilities. Some modules that are loaded by KMM could replace in-tree modules that are already loaded on the node. To unload in-tree modules before loading your module, set the value of the .spec.moduleLoader.container.inTreeModulesToRemove field to the modules that you want to unload. The following example demonstrates module replacement for all kernel mappings: # ... spec: moduleLoader: container: modprobe: moduleName: mod_a inTreeModulesToRemove: [mod_a, mod_b] In this example, the moduleLoader pod uses inTreeModulesToRemove to unload the in-tree mod_a and mod_b before loading mod_a from the moduleLoader image. When the moduleLoader pod is terminated and mod_a is unloaded, mod_b is not loaded again. The following is an example of module replacement for specific kernel mappings: # ... spec: moduleLoader: container: kernelMappings: - literal: 6.0.15-300.fc37.x86_64 containerImage: "some.registry/org/my-kmod:USD{KERNEL_FULL_VERSION}" inTreeModulesToRemove: [<module_name>, <module_name>] Additional resources Building a Linux kernel module 4.7.1.
Example Module CR The following is an annotated Module example: apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\fc37\.x86_64USD' 6 containerImage: "some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}" - regexp: '^.+USD' 7 containerImage: "some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}" build: buildArgs: 8 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 9 baseImageRegistryTLS: 10 insecure: false insecureSkipTLSVerify: false 11 dockerfileConfigMap: 12 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 13 keySecret: name: <key_secret> 14 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 15 insecure: false 16 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 17 devicePlugin: 18 container: image: some.registry/org/device-plugin:latest 19 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 20 - mountPath: /some/mountPath name: <device_plugin_volume> volumes: 21 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 22 imageRepoSecret: 23 name: <secret_name> selector: node-role.kubernetes.io/worker: "" 1 1 1 Required. 2 Optional. 3 Optional: Copies /firmware/* into /var/lib/firmware/ on the node. 4 Optional. 5 At least one kernel item is required. 6 For each node running a kernel matching the regular expression, KMM creates a DaemonSet resource running the image specified in containerImage with USD{KERNEL_FULL_VERSION} replaced with the kernel version. 7 For any other kernel, build the image using the Dockerfile in the my-kmod ConfigMap. 8 Optional. 9 Optional: A value for some-kubernetes-secret can be obtained from the build environment at /run/secrets/some-kubernetes-secret . 10 This field has no effect. When building kmod images or signing kmods within a kmod image, you might sometimes need to pull base images from a registry that serves a certificate signed by an untrusted Certificate Authority (CA). In order for KMM to trust that CA, it must also trust the new CA by replacing the cluster's CA bundle. See "Additional resources" to learn how to replace the cluster's CA bundle. 11 Optional: Avoid using this parameter. If set to true , the build will skip any TLS server certificate validation when pulling the image in the Dockerfile FROM instruction using plain HTTP. 12 Required. 13 Required: A secret holding the public secureboot key with the key 'cert'. 14 Required: A secret holding the private secureboot key with the key 'key'. 15 Optional: Avoid using this parameter. If set to true , KMM will be allowed to check if the container image already exists using plain HTTP. 16 Optional: Avoid using this parameter. If set to true , KMM will skip any TLS server certificate validation when checking if the container image already exists. 17 Optional. 18 Optional. 19 Required: If the device plugin section is present. 20 Optional. 21 Optional. 22 Optional. 23 Optional: Used to pull module loader and device plugin images. Additional resources Replacing the CA Bundle certificate 4.8. Symbolic links for in-tree dependencies Some kernel modules depend on other kernel modules that are shipped with the node's operating system. 
To avoid copying those dependencies into the kmod image, Kernel Module Management (KMM) mounts /usr/lib/modules into both the build and the worker pod's filesystems. By creating a symlink from /opt/usr/lib/modules/<kernel_version>/<symlink_name> to /usr/lib/modules/<kernel_version> , depmod can use the in-tree kmods on the building node's filesystem to resolve dependencies. At runtime, the worker pod extracts the entire image, including the <symlink_name> symbolic link. That symbolic link points to /usr/lib/modules/<kernel_version> in the worker pod, which is mounted from the node's filesystem. modprobe can then follow that link and load the in-tree dependencies as needed. In the following example, host is the symbolic link name under /opt/usr/lib/modules/<kernel_version> : ARG DTK_AUTO FROM USD{DTK_AUTO} as builder # # Build steps # FROM ubi9/ubi ARG KERNEL_FULL_VERSION RUN dnf update && dnf install -y kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ # Create the symbolic link RUN ln -s /lib/modules/USD{KERNEL_FULL_VERSION} /opt/lib/modules/USD{KERNEL_FULL_VERSION}/host RUN depmod -b /opt USD{KERNEL_FULL_VERSION} Note depmod generates dependency files based on the kernel modules present on the node that runs the kmod image build. On the node on which KMM loads the kernel modules, modprobe expects the files to be present under /usr/lib/modules/<kernel_version> , with the same filesystem layout. It is highly recommended that the build and the target nodes share the same operating system and release. 4.9. Creating a kmod image Kernel Module Management (KMM) works with purpose-built kmod images, which are standard OCI images that contain .ko files. The location of the .ko files must match the following pattern: <prefix>/lib/modules/[kernel-version]/ . Keep the following in mind when working with the .ko files: In most cases, <prefix> should be equal to /opt . This is the Module CRD's default value. kernel-version must not be empty and must be equal to the kernel version the kernel modules were built for. 4.9.1. Running depmod It is recommended to run depmod at the end of the build process to generate modules.dep and .map files. This is especially useful if your kmod image contains several kernel modules and if one of the modules depends on another module. Note You must have a Red Hat subscription to download the kernel-devel package. Procedure Generate modules.dep and .map files for a specific kernel version by running the following command: USD depmod -b /opt USD{KERNEL_FULL_VERSION} 4.9.1.1. Example Dockerfile If you are building your image on OpenShift Container Platform, consider using the Driver Toolkit (DTK). For further information, see using an entitled build .
apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_FULL_VERSION WORKDIR /usr/src RUN ["git", "clone", "https://github.com/rh-ecosystem-edge/kernel-module-management.git"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_FULL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_FULL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ RUN depmod -b /opt USD{KERNEL_FULL_VERSION} Additional resources Driver Toolkit 4.9.2. Building in the cluster KMM can build kmod images in the cluster. Follow these guidelines: Provide build instructions using the build section of a kernel mapping. Copy the Dockerfile for your container image into a ConfigMap resource, under the dockerfile key. Ensure that the ConfigMap is located in the same namespace as the Module . KMM checks if the image name specified in the containerImage field exists. If it does, the build is skipped. Otherwise, KMM creates a Build resource to build your image. After the image is built, KMM proceeds with the Module reconciliation. See the following example. # ... - regexp: '^.+USD' containerImage: "some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8 1 Optional. 2 Optional. 3 Will be mounted in the build pod as /run/secrets/some-kubernetes-secret . 4 Optional: Avoid using this parameter. If set to true , the build will be allowed to pull the image in the Dockerfile FROM instruction using plain HTTP. 5 Optional: Avoid using this parameter. If set to true , the build will skip any TLS server certificate validation when pulling the image in the Dockerfile FROM instruction using plain HTTP. 6 Required. 7 Optional: Avoid using this parameter. If set to true , KMM will be allowed to check if the container image already exists using plain HTTP. 8 Optional: Avoid using this parameter. If set to true , KMM will skip any TLS server certificate validation when checking if the container image already exists. Successful build pods are garbage collected immediately, unless the job.gcDelay parameter is set in the Operator configuration. Failed build pods are always preserved and must be deleted manually by the administrator for the build to be restarted. Additional resources Build configuration resources Preflight validation for Kernel Module Management (KMM) Modules 4.9.3. Using the Driver Toolkit The Driver Toolkit (DTK) is a convenient base image for building build kmod loader images. It contains tools and libraries for the OpenShift version currently running in the cluster. Procedure Use DTK as the first stage of a multi-stage Dockerfile . Build the kernel modules. Copy the .ko files into a smaller end-user image such as ubi-minimal . To leverage DTK in your in-cluster build, use the DTK_AUTO build argument. The value is automatically set by KMM when creating the Build resource. See the following example. 
ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_FULL_VERSION WORKDIR /usr/src RUN ["git", "clone", "https://github.com/rh-ecosystem-edge/kernel-module-management.git"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_FULL_VERSION}/build make all FROM ubi9/ubi-minimal ARG KERNEL_FULL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ RUN depmod -b /opt USD{KERNEL_FULL_VERSION} Additional resources Driver Toolkit 4.10. Using signing with Kernel Module Management (KMM) On a Secure Boot enabled system, all kernel modules (kmods) must be signed with a public/private key-pair enrolled into the Machine Owner's Key (MOK) database. Drivers distributed as part of a distribution should already be signed by the distribution's private key, but for kernel modules build out-of-tree, KMM supports signing kernel modules using the sign section of the kernel mapping. For more details on using Secure Boot, see Generating a public and private key pair Prerequisites A public private key pair in the correct (DER) format. At least one secure-boot enabled node with the public key enrolled in its MOK database. Either a pre-built driver container image, or the source code and Dockerfile needed to build one in-cluster. 4.11. Adding the keys for secureboot To use KMM Kernel Module Management (KMM) to sign kernel modules, a certificate and private key are required. For details on how to create these, see Generating a public and private key pair . For details on how to extract the public and private key pair, see Signing kernel modules with the private key . Use steps 1 through 4 to extract the keys into files. Procedure Create the sb_cert.cer file that contains the certificate and the sb_cert.priv file that contains the private key: USD openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv Add the files by using one of the following methods: Add the files as secrets directly: USD oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv> USD oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der> Add the files by base64 encoding them: USD cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64 USD cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64 Add the encoded text to a YAML file: apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key> 1 2 namespace - Replace default with a valid namespace. Apply the YAML file: USD oc apply -f <yaml_filename> 4.11.1. Checking the keys After you have added the keys, you must check them to ensure they are set correctly. Procedure Check to ensure the public key secret is set correctly: USD oc get secret -o yaml <certificate secret name> | awk '/cert/{print USD2; exit}' | base64 -d | openssl x509 -inform der -text This should display a certificate with a Serial Number, Issuer, Subject, and more. 
Check to ensure the private key secret is set correctly: USD oc get secret -o yaml <private key secret name> | awk '/key/{print USD2; exit}' | base64 -d This should display the key enclosed in the -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- lines. 4.12. Signing kmods in a pre-built image Use this procedure if you have a pre-built image, such as an image either distributed by a hardware vendor or built elsewhere. The following YAML file adds the public/private key-pair as secrets with the required key names - key for the private key, cert for the public key. The cluster then pulls down the unsignedImage image, opens it, signs the kernel modules listed in filesToSign , adds them back, and pushes the resulting image as containerImage . KMM then loads the signed kmods onto all the nodes with that match the selector. The kmods are successfully loaded on any nodes that have the public key in their MOK database, and any nodes that are not secure-boot enabled, which will ignore the signature. Prerequisites The keySecret and certSecret secrets have been created in the same namespace as the rest of the resources. Procedure Apply the YAML file: --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<module_name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\.x86_64USD' # the container to produce containing the signed kmods containerImage: <image_name> 2 sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image_name> 3 keySecret: # a secret holding the private secureboot key with the key 'key' name: <private_key_secret_name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate_secret_name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64 1 The name of the kmod to load. 2 The name of the container image. For example, quay.io/myuser/my-driver:<kernelversion . 3 The name of the unsigned image. For example, quay.io/myuser/my-driver:<kernelversion . 4.13. Building and signing a kmod image Use this procedure if you have source code and must build your image first. The following YAML file builds a new container image using the source code from the repository. The image produced is saved back in the registry with a temporary name, and this temporary image is then signed using the parameters in the sign section. The temporary image name is based on the final image name and is set to be <containerImage>:<tag>-<namespace>_<module name>_kmm_unsigned . For example, using the following YAML file, Kernel Module Management (KMM) builds an image named example.org/repository/minimal-driver:final-default_example-module_kmm_unsigned containing the build with unsigned kmods and pushes it to the registry. Then it creates a second image named example.org/repository/minimal-driver:final that contains the signed kmods. It is this second image that is pulled by the worker pods and contains the kmods to be loaded on the cluster nodes. After it is signed, you can safely delete the temporary image from the registry. 
It will be rebuilt, if needed. Prerequisites The keySecret and certSecret secrets have been created in the same namespace as the rest of the resources. Procedure Apply the YAML file: --- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: <namespace> 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM USD{DTK_AUTO} as builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi9/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/USD{KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: <namespace> 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\.x86_64USD' containerImage: <final_driver_container_name> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private_key_secret_name> certSecret: name: <certificate_secret_name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64 1 2 Replace default with a valid namespace. 3 The default serviceAccountName does not have the required permissions to run a module that is privileged. For information on creating a service account, see "Creating service accounts" in the "Additional resources" of this section. 4 Used as imagePullSecrets in the DaemonSet object and to pull and push for the build and sign features. Additional resources Creating service accounts . 4.14. KMM hub and spoke In hub and spoke scenarios, many spoke clusters are connected to a central, powerful hub cluster. Kernel Module Management (KMM) depends on Red Hat Advanced Cluster Management (RHACM) to operate in hub and spoke environments. KMM is compatible with hub and spoke environments through decoupling KMM features. A ManagedClusterModule custom resource definition (CRD) is provided to wrap the existing Module CRD and extend it to select Spoke clusters. Also provided is KMM-Hub, a new standalone controller that builds images and signs modules on the hub cluster. In hub and spoke setups, spokes are focused, resource-constrained clusters that are centrally managed by a hub cluster. Spokes run the single-cluster edition of KMM, with those resource-intensive features disabled. To adapt KMM to this environment, you should reduce the workload running on the spokes to the minimum, while the hub takes care of the expensive tasks. Building kernel module images and signing the .ko files, should run on the hub. The scheduling of the Module Loader and Device Plugin DaemonSets can only happen on the spokes. Additional resources Red Hat Advanced Cluster Management (RHACM) 4.14.1. KMM-Hub The KMM project provides KMM-Hub, an edition of KMM dedicated to hub clusters. KMM-Hub monitors all kernel versions running on the spokes and determines the nodes on the cluster that should receive a kernel module. KMM-Hub runs all compute-intensive tasks such as image builds and kmod signing, and prepares the trimmed-down Module to be transferred to the spokes through RHACM. Note KMM-Hub cannot be used to load kernel modules on the hub cluster. 
Install the regular edition of KMM to load kernel modules. Additional resources Installing KMM 4.14.2. Installing KMM-Hub You can use one of the following methods to install KMM-Hub: With the Operator Lifecycle Manager (OLM) Creating KMM resources Additional resources KMM Operator bundle 4.14.2.1. Installing KMM-Hub using the Operator Lifecycle Manager Use the Operators section of the OpenShift console to install KMM-Hub. 4.14.2.2. Installing KMM-Hub by creating KMM resources Procedure If you want to install KMM-Hub programmatically, you can use the following resources to create the Namespace , OperatorGroup and Subscription resources: --- apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace 4.14.3. Using the ManagedClusterModule CRD Use the ManagedClusterModule Custom Resource Definition (CRD) to configure the deployment of kernel modules on spoke clusters. This CRD is cluster-scoped, wraps a Module spec and adds the following additional fields: apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true' 1 moduleSpec : Contains moduleLoader and devicePlugin sections, similar to a Module resource. 2 Selects nodes within the ManagedCluster . 3 Specifies in which namespace the Module should be created. 4 Selects ManagedCluster objects. If build or signing instructions are present in .spec.moduleSpec , those pods are run on the hub cluster in the operator's namespace. When the .spec.selector matches one or more ManagedCluster resources, then KMM-Hub creates a ManifestWork resource in the corresponding namespace(s). ManifestWork contains a trimmed-down Module resource, with kernel mappings preserved but all build and sign subsections are removed. containerImage fields that contain image names ending with a tag are replaced with their digest equivalent. 4.14.4. Running KMM on the spoke After installing Kernel Module Management (KMM) on the spoke, no further action is required. Create a ManagedClusterModule object from the hub to deploy kernel modules on spoke clusters. Procedure You can install KMM on the spokes cluster through a RHACM Policy object. In addition to installing KMM from the OperatorHub and running it in a lightweight spoke mode, the Policy configures additional RBAC required for the RHACM agent to be able to manage Module resources. Use the following RHACM policy to install KMM on spoke clusters: 1 This environment variable is required when running KMM on a spoke cluster. 2 The spec.clusterSelector field can be customized to target select clusters only. 4.15. Customizing upgrades for kernel modules Use this procedure to upgrade the kernel module while running maintenance operations on the node, including rebooting the node, if needed. To minimize the impact on the workloads running in the cluster, run the kernel upgrade process sequentially, one node at a time. 
Note This procedure requires knowledge of the workload utilizing the kernel module and must be managed by the cluster administrator. Prerequisites Before upgrading, set the kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=USDmoduleVersion label on all the nodes that are used by the kernel module. Terminate all user application workloads on the node or move them to another node. Unload the currently loaded kernel module. Ensure that the user workload (the application running in the cluster that is accessing kernel module) is not running on the node prior to kernel module unloading and that the workload is back running on the node after the new kernel module version has been loaded. Procedure Ensure that the device plugin managed by KMM on the node is unloaded. Update the following fields in the Module custom resource (CR): containerImage (to the appropriate kernel version) version The update should be atomic; that is, both the containerImage and version fields must be updated simultaneously. Terminate any workload using the kernel module on the node being upgraded. Remove the kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name> label on the node. Run the following command to unload the kernel module from the node: USD oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>- If required, as the cluster administrator, perform any additional maintenance required on the node for the kernel module upgrade. If no additional upgrading is needed, you can skip Steps 3 through 6 by updating the kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name> label value to the new USDmoduleVersion as set in the Module . Run the following command to add the kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=USDmoduleVersion label to the node. The USDmoduleVersion must be equal to the new value of the version field in the Module CR. USD oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=<desired_version> Note Because of Kubernetes limitations in label names, the combined length of Module name and namespace must not exceed 39 characters. Restore any workload that leverages the kernel module on the node. Reload the device plugin managed by KMM on the node. 4.16. Day 1 kernel module loading Kernel Module Management (KMM) is typically a Day 2 Operator. Kernel modules are loaded only after the complete initialization of a Linux (RHCOS) server. However, in some scenarios the kernel module must be loaded at an earlier stage. Day 1 functionality allows you to use the Machine Config Operator (MCO) to load kernel modules during the Linux systemd initialization stage. Additional resources Machine Config Operator 4.16.1. Day 1 supported use cases The Day 1 functionality supports a limited number of use cases. The main use case is to allow loading out-of-tree (OOT) kernel modules prior to NetworkManager service initialization. It does not support loading kernel module at the initramfs stage. The following are the conditions needed for Day 1 functionality: The kernel module is not loaded in the kernel. The in-tree kernel module is loaded into the kernel, but can be unloaded and replaced by the OOT kernel module. This means that the in-tree module is not referenced by any other kernel modules. In order for Day 1 functionlity to work, the node must have a functional network interface, that is, an in-tree kernel driver for that interface. 
The OOT kernel module can be a network driver that will replace the functional network driver. 4.16.2. OOT kernel module loading flow The loading of the out-of-tree (OOT) kernel module leverages the Machine Config Operator (MCO). The flow sequence is as follows: Procedure Apply a MachineConfig resource to the existing running cluster. In order to identify the necessary nodes that need to be updated, you must create an appropriate MachineConfigPool resource. MCO applies the configuration and reboots the nodes one by one. On any rebooted node, two new systemd services are deployed: pull service and load service. The load service is configured to run prior to the NetworkConfiguration service. The service tries to pull a predefined kernel module image and then, using that image, to unload an in-tree module and load an OOT kernel module. The pull service is configured to run after the NetworkManager service. The service checks if the preconfigured kernel module image is located on the node's filesystem. If it is, the service exits normally, and the server continues with the boot process. If not, it pulls the image onto the node and reboots the node afterwards. 4.16.3. The kernel module image The Day 1 functionality uses the same DTK-based image leveraged by Day 2 KMM builds. The out-of-tree kernel module should be located under /opt/lib/modules/USD{kernelVersion} . Additional resources Driver Toolkit 4.16.4. In-tree module replacement The Day 1 functionality always tries to replace the in-tree kernel module with the OOT version. If the in-tree kernel module is not loaded, the flow is not affected; the service proceeds and loads the OOT kernel module. 4.16.5. MCO YAML creation KMM provides an API to create an MCO YAML manifest for the Day 1 functionality: ProduceMachineConfig(machineConfigName, machineConfigPoolRef, kernelModuleImage, kernelModuleName string) (string, error) The returned output is a string representation of the MCO YAML manifest to be applied. It is up to the customer to apply this YAML. The parameters are: machineConfigName The name of the MCO YAML manifest. This parameter is set as the name parameter of the metadata of the MCO YAML manifest. machineConfigPoolRef The MachineConfigPool name used to identify the targeted nodes. kernelModuleImage The name of the container image that includes the OOT kernel module. kernelModuleName The name of the OOT kernel module. This parameter is used both to unload the in-tree kernel module (if loaded into the kernel) and to load the OOT kernel module. The API is located under the pkg/mcproducer package of the KMM source code. The KMM Operator does not need to be running to use the Day 1 functionality. You only need to import the pkg/mcproducer package into your operator or utility code, call the API, and apply the produced MCO YAML to the cluster. 4.16.6. The MachineConfigPool The MachineConfigPool identifies a collection of nodes that are affected by the applied MCO. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: sfc spec: machineConfigSelector: 1 matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, sfc]} nodeSelector: 2 matchLabels: node-role.kubernetes.io/sfc: "" paused: false maxUnavailable: 1 1 Matches the labels in the MachineConfig. 2 Matches the labels on the node.
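The following is a minimal sketch of calling the ProduceMachineConfig API described above from Go. It assumes that the pkg/mcproducer package is importable from the KMM repository at the path shown and that the function is exposed at package level; the MachineConfig name, pool reference, image, and module name are illustrative placeholders.

package main

import (
	"fmt"
	"log"

	// Import path assumed from the pkg/mcproducer location in the KMM source tree.
	"github.com/rh-ecosystem-edge/kernel-module-management/pkg/mcproducer"
)

func main() {
	// All argument values are placeholders; the pool name matches the example above.
	manifest, err := mcproducer.ProduceMachineConfig(
		"99-sfc-day1-kmod",             // machineConfigName
		"sfc",                          // machineConfigPoolRef
		"quay.io/example/oot-kmod:1.0", // kernelModuleImage
		"oot_kmod",                     // kernelModuleName
	)
	if err != nil {
		log.Fatalf("producing MachineConfig: %v", err)
	}
	// The returned string is the MachineConfig YAML; apply it to the cluster,
	// for example by piping it to oc apply -f -.
	fmt.Print(manifest)
}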
There are predefined MachineConfigPools in the OCP cluster: worker : Targets all worker nodes in the cluster master : Targets all master nodes in the cluster Define the following MachineConfig to target the master MachineConfigPool : metadata: labels: machineconfiguration.openshift.io/role: master Define the following MachineConfig to target the worker MachineConfigPool : metadata: labels: machineconfiguration.openshift.io/role: worker Additional resources About MachineConfigPool 4.17. Debugging and troubleshooting If the kmods in your driver container are not signed or are signed with the wrong key, then the container can enter a PostStartHookError or CrashLoopBackOff status. You can verify this by running the oc describe command on your container, which displays the following message in this scenario: modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available 4.18. KMM firmware support Kernel modules sometimes need to load firmware files from the file system. KMM supports copying firmware files from the kmod image to the node's file system. The contents of .spec.moduleLoader.container.modprobe.firmwarePath are copied into the /var/lib/firmware path on the node before running the modprobe command to insert the kernel module. All files and empty directories are removed from that location before running the modprobe -r command to unload the kernel module, when the pod is terminated. 4.18.1. Configuring the lookup path on nodes On OpenShift Container Platform nodes, the default set of firmware lookup paths does not include the /var/lib/firmware path. Procedure Use the Machine Config Operator to create a MachineConfig custom resource (CR) that contains the /var/lib/firmware path: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware' 1 You can configure the label based on your needs. In the case of single-node OpenShift, use either control-plane or master objects. By applying the MachineConfig CR, the nodes are automatically rebooted. Additional resources Machine Config Operator . 4.18.2. Building a kmod image Procedure In addition to building the kernel module itself, include the binary firmware in the builder image: FROM registry.redhat.io/ubi9/ubi-minimal as builder # Build the kmod RUN ["mkdir", "/firmware"] RUN ["curl", "-o", "/firmware/firmware.bin", "https://artifacts.example.com/firmware.bin"] FROM registry.redhat.io/ubi9/ubi-minimal # Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware 4.18.3. Tuning the Module resource Procedure Set .spec.moduleLoader.container.modprobe.firmwarePath in the Module custom resource (CR): apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1 1 Optional: Copies /firmware/* into /var/lib/firmware/ on the node. 4.19. Day 0 through Day 2 kmod installation You can install some kernel modules (kmods) during Day 0 through Day 2 operations without Kernel Module Management (KMM). This could assist in the transition of the kmods to KMM. Use the following criteria to determine suitable kmod installations. Day 0 The most basic kmods that are required for a node to become Ready in the cluster.
Examples of these types of kmods include: A storage driver that is required to mount the rootFS as part of the boot process A network driver that is required for the machine to access machine-config-server on the bootstrap node to pull the ignition and join the cluster Day 1 Kmods that are not required for a node to become Ready in the cluster but cannot be unloaded when the node is Ready . An example of this type of kmod is an out-of-tree (OOT) network driver that replaces an outdated in-tree driver to exploit the full potential of the NIC while NetworkManager depends on it. When the node is Ready , you cannot unload the driver because of the NetworkManager dependency. Day 2 Kmods that can be dynamically loaded to the kernel or removed from it without interfering with the cluster infrastructure, for example, connectivity. Examples of these types of kmods include: GPU operators Secondary network adapters field-programmable gate arrays (FPGAs) 4.19.1. Layering background When a Day 0 kmod is installed in the cluster, layering is applied through the Machine Config Operator (MCO) and OpenShift Container Platform upgrades do not trigger node upgrades. You only need to recompile the driver if you add new features to it, because the node's operating system will remain the same. 4.19.2. Lifecycle management You can leverage KMM to manage the Day 0 through Day 2 lifecycle of kmods without a reboot when the driver allows it. Note This will not work if the upgrade requires a node reboot, for example, when rebuilding initramfs files is needed. Use one of the following options for lifecycle management. 4.19.2.1. Treat the kmod as an in-tree driver Use this method when you want to upgrade the kmods. In this case, treat the kmod as an in-tree driver and create a Module in the cluster with the inTreeRemoval field to unload the old version of the driver. Note the following characteristics of treating the kmod as an in-tree driver: Downtime might occur as KMM tries to unload and load the kmod on all the selected nodes simultaneously. This works if removing the driver makes the node lose connectivity because KMM uses a single pod to unload and load the driver. 4.19.2.2. Use ordered upgrade You can use ordered upgrade (ordered_upgrade.md) to create a versioned Module in the cluster representing the kmods with no effect, because the kmods are already loaded. Note the following characteristics of using ordered upgrade: There is no cluster downtime because you control the pace of the upgrade and how many nodes are upgraded at the same time; therefore, an upgrade with no downtime is possible. This method will not work if unloading the driver results in losing connection to the node, because KMM creates two different worker pods for unloading and another for loading. These pods will not be scheduled. 4.20. Troubleshooting KMM When troubleshooting KMM installation issues, you can monitor logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage. 4.20.1. Reading Operator logs You can use the oc logs command to read Operator logs, as in the following examples. 
Example command for KMM controller USD oc logs -fn openshift-kmm deployments/kmm-operator-controller Example command for KMM webhook server USD oc logs -fn openshift-kmm deployments/kmm-operator-webhook-server Example command for KMM-Hub controller USD oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller Example command for KMM-Hub webhook server USD oc logs -fn openshift-kmm deployments/kmm-operator-hub-webhook-server 4.20.2. Observing events Use the following methods to view KMM events. Build & sign KMM publishes events whenever it starts a kmod image build or observes its outcome. These events are attached to Module objects and are available at the end of the output of oc describe module command, as in the following example: USD oc describe modules.kmm.sigs.x-k8s.io kmm-ci-a [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal BuildCreated 2m29s kmm Build created for kernel 6.6.2-201.fc39.x86_64 Normal BuildSucceeded 63s kmm Build job succeeded for kernel 6.6.2-201.fc39.x86_64 Normal SignCreated 64s (x2 over 64s) kmm Sign created for kernel 6.6.2-201.fc39.x86_64 Normal SignSucceeded 57s kmm Sign job succeeded for kernel 6.6.2-201.fc39.x86_64 Module load or unload KMM publishes events whenever it successfully loads or unloads a kernel module on a node. These events are attached to Node objects and are available at the end of the output of oc describe node command, as in the following example: USD oc describe node my-node [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- [...] Normal ModuleLoaded 4m17s kmm Module default/kmm-ci-a loaded into the kernel Normal ModuleUnloaded 2s kmm Module default/kmm-ci-a unloaded from the kernel 4.20.3. Using the must-gather tool The oc adm must-gather command is the preferred way to collect a support bundle and provide debugging information to Red Hat Support. Collect specific information by running the command with the appropriate arguments as described in the following sections. Additional resources About the must-gather tool 4.20.3.1. Gathering data for KMM Procedure Gather the data for the KMM Operator controller manager: Set the MUST_GATHER_IMAGE variable: USD export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm kmm-operator-controller -ojsonpath='{.spec.template.spec.containers[?(@.name=="manager")].env[?(@.name=="RELATED_IMAGE_MUST_GATHER")].value}') USD oc adm must-gather --image="USD{MUST_GATHER_IMAGE}" -- /usr/bin/gather Note Use the -n <namespace> switch to specify a namespace if you installed KMM in a custom namespace. Run the must-gather tool: USD oc adm must-gather --image="USD{MUST_GATHER_IMAGE}" -- /usr/bin/gather View the Operator logs: USD oc logs -fn openshift-kmm deployments/kmm-operator-controller Example 4.1. 
Example output I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"="127.0.0.1:8080" I0228 09:36:40.769483 1 main.go:234] kmm/setup "msg"="starting manager" I0228 09:36:40.769907 1 internal.go:366] kmm "msg"="Starting server" "addr"={"IP":"127.0.0.1","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics" I0228 09:36:40.770025 1 internal.go:366] kmm "msg"="Starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io... I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1beta1.Module" I0228 09:36:40.784925 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.DaemonSet" I0228 09:36:40.784968 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Build" I0228 09:36:40.785001 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Job" I0228 09:36:40.785025 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Node" I0228 09:36:40.785039 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" I0228 09:36:40.785458 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PodNodeModule" "controllerGroup"="" "controllerKind"="Pod" "source"="kind source: *v1.Pod" I0228 09:36:40.786947 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1beta1.PreflightValidation" I0228 09:36:40.787406 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1.Build" I0228 09:36:40.787474 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1.Job" I0228 09:36:40.787488 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1beta1.Module" I0228 09:36:40.787603 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="NodeKernel" "controllerGroup"="" "controllerKind"="Node" "source"="kind source: *v1.Node" I0228 09:36:40.787634 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="NodeKernel" "controllerGroup"="" "controllerKind"="Node" I0228 09:36:40.787680 1 controller.go:193] kmm 
"msg"="Starting Controller" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" I0228 09:36:40.785607 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "source"="kind source: *v1.ImageStream" I0228 09:36:40.787822 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" "source"="kind source: *v1beta1.PreflightValidationOCP" I0228 09:36:40.787853 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" I0228 09:36:40.787879 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" "source"="kind source: *v1beta1.PreflightValidation" I0228 09:36:40.787905 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" I0228 09:36:40.786489 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="PodNodeModule" "controllerGroup"="" "controllerKind"="Pod" 4.20.3.2. Gathering data for KMM-Hub Procedure Gather the data for the KMM Operator hub controller manager: Set the MUST_GATHER_IMAGE variable: USD export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller -ojsonpath='{.spec.template.spec.containers[?(@.name=="manager")].env[?(@.name=="RELATED_IMAGE_MUST_GATHER")].value}') USD oc adm must-gather --image="USD{MUST_GATHER_IMAGE}" -- /usr/bin/gather -u Note Use the -n <namespace> switch to specify a namespace if you installed KMM in a custom namespace. Run the must-gather tool: USD oc adm must-gather --image="USD{MUST_GATHER_IMAGE}" -- /usr/bin/gather -u View the Operator logs: USD oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller Example 4.2. Example output I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"="127.0.0.1:8080" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup "msg"="Adding controller" "name"="ManagedClusterModule" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup "msg"="starting manager" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io... 
I0417 11:34:12.378078 1 internal.go:366] kmm-hub "msg"="Starting server" "addr"={"IP":"127.0.0.1","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics" I0417 11:34:12.378222 1 internal.go:366] kmm-hub "msg"="Starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1beta1.ManagedClusterModule" I0417 11:34:12.396403 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.ManifestWork" I0417 11:34:12.396430 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.Build" I0417 11:34:12.396469 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.Job" I0417 11:34:12.396522 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.ManagedCluster" I0417 11:34:12.396543 1 controller.go:193] kmm-hub "msg"="Starting Controller" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" I0417 11:34:12.397175 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "source"="kind source: *v1.ImageStream" I0417 11:34:12.397221 1 controller.go:193] kmm-hub "msg"="Starting Controller" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" I0417 11:34:12.498335 1 filter.go:196] kmm-hub "msg"="Listing all ManagedClusterModules" "managedcluster"="local-cluster" I0417 11:34:12.498570 1 filter.go:205] kmm-hub "msg"="Listed ManagedClusterModules" "count"=0 "managedcluster"="local-cluster" I0417 11:34:12.498629 1 filter.go:238] kmm-hub "msg"="Adding reconciliation requests" "count"=0 "managedcluster"="local-cluster" I0417 11:34:12.498687 1 filter.go:196] kmm-hub "msg"="Listing all ManagedClusterModules" "managedcluster"="sno1-0" I0417 11:34:12.498750 1 filter.go:205] kmm-hub "msg"="Listed ManagedClusterModules" "count"=0 "managedcluster"="sno1-0" I0417 11:34:12.498801 1 filter.go:238] kmm-hub "msg"="Adding reconciliation requests" "count"=0 "managedcluster"="sno1-0" I0417 11:34:12.501947 1 controller.go:227] kmm-hub "msg"="Starting workers" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "worker count"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub "msg"="Starting workers" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "worker count"=1 I0417 11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub "msg"="registered imagestream info mapping" "ImageStream"={"name":"driver-toolkit","namespace":"openshift"} "controller"="imagestream" 
"controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "dtkImage"="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934" "name"="driver-toolkit" "namespace"="openshift" "osImageVersion"="412.86.202302211547-0" "reconcileID"="e709ff0a-5664-4007-8270-49b5dff8bae9"
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-kmm", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0", "oc create -f kmm-sub.yaml", "oc get -n openshift-kmm deployments.apps kmm-operator-controller", "NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller 1/1 1 1 97s", "apiVersion: v1 kind: Namespace metadata: name: openshift-kmm", "allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret", "oc apply -f kmm-security-constraint.yaml", "oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller -n openshift-kmm", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0", "oc create -f kmm-sub.yaml", "oc get -n openshift-kmm deployments.apps kmm-operator-controller", "NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller 1/1 1 1 97s", "oc edit configmap -n \"USDnamespace\" kmm-operator-manager-config", "healthProbeBindAddress: :8081 job: gcDelay: 1h leaderElection: enabled: true resourceID: kmm.sigs.x-k8s.io webhook: disableHTTP2: true # CVE-2023-44487 port: 9443 metrics: enableAuthnAuthz: true disableHTTP2: true # CVE-2023-44487 bindAddress: 0.0.0.0:8443 secureServing: true worker: runAsUser: 0 seLinuxType: spc_t setFirmwareClassPath: /var/lib/firmware", "oc delete pod -n \"<namespace>\" -l app.kubernetes.io/component=kmm", "oc delete -k https://github.com/rh-ecosystem-edge/kernel-module-management/config/default", "spec: moduleLoader: container: modprobe: moduleName: mod_a dirName: /opt firmwarePath: /firmware parameters: - param=1 modulesLoadingOrder: - mod_a - mod_b", "oc adm policy add-scc-to-user privileged -z \"USD{serviceAccountName}\" [ -n \"USD{namespace}\" ]", "spec: moduleLoader: container: modprobe: moduleName: mod_a inTreeModulesToRemove: [mod_a, mod_b]", "spec: moduleLoader: container: kernelMappings: - literal: 6.0.15-300.fc37.x86_64 containerImage: \"some.registry/org/my-kmod:USD{KERNEL_FULL_VERSION}\" inTreeModulesToRemove: [<module_name>, <module_name>]", "apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 
firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\\fc37\\.x86_64USD' 6 containerImage: \"some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" - regexp: '^.+USD' 7 containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 8 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 9 baseImageRegistryTLS: 10 insecure: false insecureSkipTLSVerify: false 11 dockerfileConfigMap: 12 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 13 keySecret: name: <key_secret> 14 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 15 insecure: false 16 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 17 devicePlugin: 18 container: image: some.registry/org/device-plugin:latest 19 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 20 - mountPath: /some/mountPath name: <device_plugin_volume> volumes: 21 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 22 imageRepoSecret: 23 name: <secret_name> selector: node-role.kubernetes.io/worker: \"\"", "ARG DTK_AUTO FROM USD{DTK_AUTO} as builder # Build steps # FROM ubi9/ubi ARG KERNEL_FULL_VERSION RUN dnf update && dnf install -y kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ Create the symbolic link RUN ln -s /lib/modules/USD{KERNEL_FULL_VERSION} /opt/lib/modules/USD{KERNEL_FULL_VERSION}/host RUN depmod -b /opt USD{KERNEL_FULL_VERSION}", "depmod -b /opt USD{KERNEL_FULL_VERSION}+`.", "apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_FULL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_FULL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_FULL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ RUN depmod -b /opt USD{KERNEL_FULL_VERSION}", "- regexp: '^.+USD' containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8", "ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_FULL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_FULL_VERSION}/build make all FROM ubi9/ubi-minimal ARG KERNEL_FULL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder 
/usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ RUN depmod -b /opt USD{KERNEL_FULL_VERSION}", "openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv", "oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv>", "oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der>", "cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64", "cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64", "apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key>", "oc apply -f <yaml_filename>", "oc get secret -o yaml <certificate secret name> | awk '/cert/{print USD2; exit}' | base64 -d | openssl x509 -inform der -text", "oc get secret -o yaml <private key secret name> | awk '/key/{print USD2; exit}' | base64 -d", "--- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<module_name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\\.x86_64USD' # the container to produce containing the signed kmods containerImage: <image_name> 2 sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image_name> 3 keySecret: # a secret holding the private secureboot key with the key 'key' name: <private_key_secret_name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate_secret_name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64", "--- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: <namespace> 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM USD{DTK_AUTO} as builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi9/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/USD{KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: <namespace> 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\\.x86_64USD' containerImage: <final_driver_container_name> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private_key_secret_name> certSecret: name: <certificate_secret_name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64", "--- 
apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace", "apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true'", "--- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: install-kmm spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-kmm spec: severity: high object-templates: - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kmm namespace: openshift-kmm spec: upgradeStrategy: Default - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: stable config: env: - name: KMM_MANAGED 1 value: \"1\" installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kmm-module-manager rules: - apiGroups: [kmm.sigs.x-k8s.io] resources: [modules] verbs: [create, delete, get, list, patch, update, watch] - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: klusterlet-kmm subjects: - kind: ServiceAccount name: klusterlet-work-sa namespace: open-cluster-management-agent roleRef: kind: ClusterRole name: kmm-module-manager apiGroup: rbac.authorization.k8s.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-managed-clusters spec: clusterSelector: 2 matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: install-kmm placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: all-managed-clusters subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-kmm", "oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>-", "oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=<desired_version>", "ProduceMachineConfig(machineConfigName, machineConfigPoolRef, kernelModuleImage, kernelModuleName string) (string, error)", "kind: MachineConfigPool metadata: name: sfc spec: machineConfigSelector: 1 matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, sfc]} nodeSelector: 2 matchLabels: node-role.kubernetes.io/sfc: \"\" paused: false maxUnavailable: 1", "metadata: labels: machineconfiguration.opensfhit.io/role: master", "metadata: labels: machineconfiguration.opensfhit.io/role: 
worker", "modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware'", "FROM registry.redhat.io/ubi9/ubi-minimal as builder Build the kmod RUN [\"mkdir\", \"/firmware\"] RUN [\"curl\", \"-o\", \"/firmware/firmware.bin\", \"https://artifacts.example.com/firmware.bin\"] FROM registry.redhat.io/ubi9/ubi-minimal Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware", "apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1", "oc logs -fn openshift-kmm deployments/kmm-operator-controller", "oc logs -fn openshift-kmm deployments/kmm-operator-webhook-server", "oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller", "oc logs -fn openshift-kmm deployments/kmm-operator-hub-webhook-server", "oc describe modules.kmm.sigs.x-k8s.io kmm-ci-a [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal BuildCreated 2m29s kmm Build created for kernel 6.6.2-201.fc39.x86_64 Normal BuildSucceeded 63s kmm Build job succeeded for kernel 6.6.2-201.fc39.x86_64 Normal SignCreated 64s (x2 over 64s) kmm Sign created for kernel 6.6.2-201.fc39.x86_64 Normal SignSucceeded 57s kmm Sign job succeeded for kernel 6.6.2-201.fc39.x86_64", "oc describe node my-node [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- [...] Normal ModuleLoaded 4m17s kmm Module default/kmm-ci-a loaded into the kernel Normal ModuleUnloaded 2s kmm Module default/kmm-ci-a unloaded from the kernel", "export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm kmm-operator-controller -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGE_MUST_GATHER\")].value}') oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather", "oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather", "oc logs -fn openshift-kmm deployments/kmm-operator-controller", "I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0228 09:36:40.769483 1 main.go:234] kmm/setup \"msg\"=\"starting manager\" I0228 09:36:40.769907 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0228 09:36:40.770025 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.784925 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" 
\"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.DaemonSet\" I0228 09:36:40.784968 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.785001 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.785025 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.785039 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" I0228 09:36:40.785458 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\" \"source\"=\"kind source: *v1.Pod\" I0228 09:36:40.786947 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787406 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.787474 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.787488 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.787603 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.787634 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" I0228 09:36:40.787680 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" I0228 09:36:40.785607 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0228 09:36:40.787822 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidationOCP\" I0228 09:36:40.787853 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0228 09:36:40.787879 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" 
\"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787905 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" I0228 09:36:40.786489 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\"", "export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGE_MUST_GATHER\")].value}') oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u", "oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u", "oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller", "I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup \"msg\"=\"Adding controller\" \"name\"=\"ManagedClusterModule\" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup \"msg\"=\"starting manager\" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.378078 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0417 11:34:12.378222 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1beta1.ManagedClusterModule\" I0417 11:34:12.396403 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManifestWork\" I0417 11:34:12.396430 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Build\" I0417 11:34:12.396469 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Job\" I0417 11:34:12.396522 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManagedCluster\" I0417 11:34:12.396543 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" 
\"controllerKind\"=\"ManagedClusterModule\" I0417 11:34:12.397175 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0417 11:34:12.397221 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0417 11:34:12.498335 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498570 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498629 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498687 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498750 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498801 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.501947 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"worker count\"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"worker count\"=1 I0417 11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub \"msg\"=\"registered imagestream info mapping\" \"ImageStream\"={\"name\":\"driver-toolkit\",\"namespace\":\"openshift\"} \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"dtkImage\"=\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934\" \"name\"=\"driver-toolkit\" \"namespace\"=\"openshift\" \"osImageVersion\"=\"412.86.202302211547-0\" \"reconcileID\"=\"e709ff0a-5664-4007-8270-49b5dff8bae9\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/specialized_hardware_and_driver_enablement/kernel-module-management-operator
Chapter 18. KubeControllerManager [operator.openshift.io/v1]
Chapter 18. KubeControllerManager [operator.openshift.io/v1] Description KubeControllerManager provides information to configure an operator to manage kube-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 18.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Kubernetes Controller Manager status object status is the most recently observed status of the Kubernetes Controller Manager 18.1.1. .spec Description spec is the specification of the desired behavior of the Kubernetes Controller Manager Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. 
unsupportedConfigOverrides useMoreSecureServiceCA boolean useMoreSecureServiceCA indicates that the service-ca.crt provided in SA token volumes should include only enough certificates to validate service serving certificates. Once set to true, it cannot be set to false. Even if someone finds a way to set it back to false, the service-ca.crt files that previously existed will only have the more secure content. 18.1.2. .status Description status is the most recently observed status of the Kubernetes Controller Manager Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 18.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 18.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 18.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 18.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 18.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 18.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. 
lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 18.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubecontrollermanagers DELETE : delete collection of KubeControllerManager GET : list objects of kind KubeControllerManager POST : create a KubeControllerManager /apis/operator.openshift.io/v1/kubecontrollermanagers/{name} DELETE : delete a KubeControllerManager GET : read the specified KubeControllerManager PATCH : partially update the specified KubeControllerManager PUT : replace the specified KubeControllerManager /apis/operator.openshift.io/v1/kubecontrollermanagers/{name}/status GET : read status of the specified KubeControllerManager PATCH : partially update status of the specified KubeControllerManager PUT : replace status of the specified KubeControllerManager 18.2.1. /apis/operator.openshift.io/v1/kubecontrollermanagers Table 18.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of KubeControllerManager Table 18.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 18.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeControllerManager Table 18.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 18.5. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManagerList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeControllerManager Table 18.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.7. Body parameters Parameter Type Description body KubeControllerManager schema Table 18.8. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 201 - Created KubeControllerManager schema 202 - Accepted KubeControllerManager schema 401 - Unauthorized Empty 18.2.2. /apis/operator.openshift.io/v1/kubecontrollermanagers/{name} Table 18.9. Global path parameters Parameter Type Description name string name of the KubeControllerManager Table 18.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a KubeControllerManager Table 18.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 18.12. Body parameters Parameter Type Description body DeleteOptions schema Table 18.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeControllerManager Table 18.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 18.15. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeControllerManager Table 18.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.17. Body parameters Parameter Type Description body Patch schema Table 18.18. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeControllerManager Table 18.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.20. Body parameters Parameter Type Description body KubeControllerManager schema Table 18.21. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 201 - Created KubeControllerManager schema 401 - Unauthorized Empty 18.2.3. /apis/operator.openshift.io/v1/kubecontrollermanagers/{name}/status Table 18.22. Global path parameters Parameter Type Description name string name of the KubeControllerManager Table 18.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified KubeControllerManager Table 18.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 18.25. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeControllerManager Table 18.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.27. Body parameters Parameter Type Description body Patch schema Table 18.28. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeControllerManager Table 18.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.30. Body parameters Parameter Type Description body KubeControllerManager schema Table 18.31. HTTP responses HTTP code Reponse body 200 - OK KubeControllerManager schema 201 - Created KubeControllerManager schema 401 - Unauthorized Empty
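In practice, these endpoints are usually exercised through the oc client rather than direct HTTP calls. The commands below are an illustrative sketch only; they assume the default cluster-scoped instance is named cluster, which is the usual convention but is not stated in this reference. They touch the spec fields and status described above: $ oc get kubecontrollermanager cluster -o jsonpath='{.status.conditions}' $ oc patch kubecontrollermanager cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}' $ oc patch kubecontrollermanager cluster --type=merge -p '{"spec":{"forceRedeploymentReason":"example-reason-1"}}' The oc get call corresponds to a GET on the resource, the oc patch calls correspond to the PATCH endpoint in section 18.2.2, and replacing the whole object with oc replace corresponds to PUT.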
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operator_apis/kubecontrollermanager-operator-openshift-io-v1
Chapter 6. Preparing installation assets for iSCSI booting
Chapter 6. Preparing installation assets for iSCSI booting You can boot an OpenShift Container Platform cluster through Internet Small Computer System Interface (iSCSI) by using an ISO image generated by the Agent-based Installer. The following procedures describe how to prepare the necessary installation resources to boot from an iSCSI target. The assets you create in these procedures deploy a single-node OpenShift Container Platform installation. You can use these procedures as a basis and modify configurations according to your requirements. 6.1. Requirements for iSCSI booting The following configurations are necessary to enable iSCSI booting when using the Agent-based Installer: Dynamic Host Configuration Protocol (DHCP) must be configured. Static networking is not supported. You must create an additional network for iSCSI that is separate from the machine network of the cluster. The machine network is rebooted during cluster installation and cannot be used for the iSCSI session. If the host on which you are booting the agent ISO image also has an installed disk, it might be necessary to specify the iSCSI disk name in the rootDeviceHints parameter to ensure that it is chosen as the boot disk for the final Red Hat Enterprise Linux CoreOS (RHCOS) image. You can also use a diskless environment for iSCSI booting, in which case you do not need to set the rootDeviceHints parameter. Additional resources DHCP About root device hints 6.2. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall or proxy, you configured it to allow the sites that your cluster requires access to. 6.3. Downloading the Agent-based Installer Use this procedure to download the Agent-based Installer and the CLI needed for your installation. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Navigate to Datacenter . Click Run Agent-based Installer locally . Select the operating system and architecture for the OpenShift Installer and Command line interface . Click Download Installer to download and extract the install program. Download or copy the pull secret by clicking on Download pull secret or Copy pull secret . Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH . 6.4. Creating the preferred configuration inputs Use this procedure to create the preferred configuration inputs used to create the agent image. Note Configuring the install-config.yaml and agent-config.yaml files is the preferred method for using the Agent-based Installer. Using GitOps ZTP manifests is optional. Procedure Install the nmstate dependency by running the following command: USD sudo dnf install /usr/bin/nmstatectl -y Place the openshift-install binary in a directory that is on your PATH. 
Create a directory to store the install configuration by running the following command: USD mkdir ~/<directory_name> Create the install-config.yaml file by running the following command: USD cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 EOF 1 Specify the system architecture. Valid values are amd64 , arm64 , ppc64le , and s390x . If you are using the release image with the multi payload, you can install the cluster on different architectures such as arm64 , amd64 , s390x , and ppc64le . Otherwise, you can install the cluster only on the release architecture displayed in the output of the openshift-install version command. For more information, see "Verifying the supported architecture for installing an Agent-based Installer cluster". 2 Required. Specify your cluster name. 3 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 4 Specify your platform. Note For bare-metal platforms, host settings made in the platform section of the install-config.yaml file are used by default, unless they are overridden by configurations made in the agent-config.yaml file. 5 Specify your pull secret. 6 Specify your SSH public key. Note If you set the platform to vSphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) IPv6 is supported only on bare metal platforms. Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 Note When you use a disconnected mirror registry, you must add the certificate file that you created previously for your mirror registry to the additionalTrustBundle field of the install-config.yaml file. Create the agent-config.yaml file by running the following command: USD cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 minimalISO: true 6 EOF 1 This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig . 2 Optional: Host configuration.
The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters. 3 Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods. 4 Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. 5 Optional: Configures the network interface of a host in NMState format. 6 Generates an ISO image without the rootfs image file, and instead provides details about where to pull the rootfs file from. You must set this parameter to true to enable iSCSI booting. Additional resources Deploying with dual-stack networking Configuring the install-config yaml file Configuring a three-node cluster About root device hints NMState state examples (NMState documentation) Optional: Creating additional manifest files Verifying the supported architecture for an Agent-based installation 6.5. Creating the installation files Use the following procedure to generate the ISO image and create an iPXE script to upload to your iSCSI target. Procedure Create the agent image by running the following command: USD openshift-install --dir <install_directory> agent create image Create an iPXE script by running the following command: USD cat << EOF > agent.ipxe !ipxe set initiator-iqn <iscsi_initiator_base>:\USD{hostname} sanboot --keep iscsi:<iscsi_network_subnet>.1::::<iscsi_target_base>:\USD{hostname} EOF where: <iscsi_initiator_base> Specifies the iSCSI initiator name on the host that is booting the ISO. This name can also be used by the iSCSI target. <iscsi_network_subnet> Specifies the IP address of the iSCSI target. <iscsi_target_base> Specifies the iSCSI target name. This name can be the same as the initiator name. Example Command USD cat << EOF > agent.ipxe !ipxe set initiator-iqn iqn.2023-01.com.example:\USD{hostname} sanboot --keep iscsi:192.168.45.1::::iqn.2023-01.com.example:\USD{hostname} EOF
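The generated ISO must end up on the writable disk that backs your iSCSI target, and the installation can then be monitored from the machine where you ran the installer. The following commands are a sketch only; the backing file path /var/lib/iscsi/agent0.img is a hypothetical example of a LUN backing store that you have already exported (for example, with targetcli), and an x86_64 image name is assumed:
# Copy the agent ISO onto the backing store of the iSCSI LUN
dd if=<install_directory>/agent.x86_64.iso of=/var/lib/iscsi/agent0.img conv=notrunc status=progress
# After the host boots through the iPXE script, monitor the installation
openshift-install --dir <install_directory> agent wait-for bootstrap-complete --log-level=info
openshift-install --dir <install_directory> agent wait-for install-complete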
[ "sudo dnf install /usr/bin/nmstatectl -y", "mkdir ~/<directory_name>", "cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 EOF", "networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5", "cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 minimalISO: true 6 EOF", "openshift-install --dir <install_directory> agent create image", "cat << EOF > agent.ipxe !ipxe set initiator-iqn <iscsi_initiator_base>:\\USD{hostname} sanboot --keep iscsi:<iscsi_network_subnet>.1::::<iscsi_target_base>:\\USD{hostname} EOF", "cat << EOF > agent.ipxe !ipxe set initiator-iqn iqn.2023-01.com.example:\\USD{hostname} sanboot --keep iscsi:192.168.45.1::::iqn.2023-01.com.example:\\USD{hostname} EOF" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_an_on-premise_cluster_with_the_agent-based_installer/installing-using-iscsi
Replacing nodes
Replacing nodes Red Hat OpenShift Data Foundation 4.15 Instructions for how to safely replace a node in an OpenShift Data Foundation cluster. Red Hat Storage Documentation Team Abstract This document explains how to safely replace a node in a Red Hat OpenShift Data Foundation cluster.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/replacing_nodes/index
1.4. Installing Red Hat High Availability Add-On software
1.4. Installing Red Hat High Availability Add-On software To install Red Hat High Availability Add-On software, you must have entitlements for the software. If you are using the luci configuration GUI, you can let it install the cluster software. If you are using other tools to configure the cluster, secure and install the software as you would with Red Hat Enterprise Linux software. You can use the following yum install command to install the Red Hat High Availability Add-On software packages: Note that installing only the rgmanager package will pull in all necessary dependencies to create an HA cluster from the HighAvailability channel. The lvm2-cluster and gfs2-utils packages are part of the ResilientStorage channel and may not be needed by your site. Warning After you install the Red Hat High Availability Add-On packages, you should ensure that your software update preferences are set so that nothing is installed automatically. Installation on a running cluster can cause unexpected behaviors. Upgrading Red Hat High Availability Add-On Software It is possible to upgrade the cluster software on a given major release of Red Hat Enterprise Linux without taking the cluster out of production. Doing so requires disabling the cluster software on one host at a time, upgrading the software, and restarting the cluster software on that host. Shut down all cluster services on a single cluster node. For instructions on stopping cluster software on a node, see Section 9.1.2, "Stopping Cluster Software" . It may be desirable to manually relocate cluster-managed services and virtual machines off of the host prior to stopping rgmanager . Execute the yum update command to update installed packages. Reboot the cluster node or restart the cluster services manually. For instructions on starting cluster software on a node, see Section 9.1.1, "Starting Cluster Software" .
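As an illustration of the rolling upgrade described above, the following sequence is a sketch of the commands run on one node at a time; adjust it for the services that are actually enabled on your nodes, and see Section 9.1.1 and Section 9.1.2 for the authoritative start and stop procedures:
clusvcadm -r <service_name> -m <other_node>   # optionally relocate managed services first
service rgmanager stop
service gfs2 stop                             # only if GFS2 file systems are used
service clvmd stop                            # only if clustered LVM is used
service cman stop
yum update
service cman start
service clvmd start                           # only if clustered LVM is used
service gfs2 start                            # only if GFS2 file systems are used
service rgmanager start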
[ "yum install rgmanager lvm2-cluster gfs2-utils" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-install-clust-sw-CA
Appendix C. S3 common response status codes
Appendix C. S3 common response status codes The following table lists the valid common HTTP response statuses and their corresponding codes. Table C.1. Response Status
HTTP Status - Response Code
100 - Continue
200 - Success
201 - Created
202 - Accepted
204 - NoContent
206 - Partial content
304 - NotModified
400 - InvalidArgument
400 - InvalidDigest
400 - BadDigest
400 - InvalidBucketName
400 - InvalidObjectName
400 - UnresolvableGrantByEmailAddress
400 - InvalidPart
400 - InvalidPartOrder
400 - RequestTimeout
400 - EntityTooLarge
403 - AccessDenied
403 - UserSuspended
403 - RequestTimeTooSkewed
404 - NoSuchKey
404 - NoSuchBucket
404 - NoSuchUpload
405 - MethodNotAllowed
408 - RequestTimeout
409 - BucketAlreadyExists
409 - BucketNotEmpty
411 - MissingContentLength
412 - PreconditionFailed
416 - InvalidRange
422 - UnprocessableEntity
500 - InternalError
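As an example of how these codes surface in practice, the following awscli calls are a sketch against a hypothetical Ceph Object Gateway endpoint; the endpoint URL, bucket names, and object key are placeholders:
# HEAD a bucket that does not exist; the gateway answers with 404
aws --endpoint-url http://rgw.example.com:8080 s3api head-bucket --bucket no-such-bucket
# GET an object the credentials are not allowed to read; the gateway answers with 403 (AccessDenied)
aws --endpoint-url http://rgw.example.com:8080 s3api get-object --bucket restricted-bucket --key secret.txt /tmp/secret.txt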
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/developer_guide/s3-common-response-status-codes_dev
19.5. Adding a Remote Connection
19.5. Adding a Remote Connection This procedure covers how to set up a connection to a remote system using virt-manager . To create a new connection, open the File menu and select the Add Connection menu item. The Add Connection wizard appears. Select the hypervisor. For Red Hat Enterprise Linux 7 systems, select QEMU/KVM . Select Local for the local system or one of the remote connection options and click Connect . This example uses Remote tunnel over SSH, which works on default installations. For more information on configuring remote connections, see Chapter 18, Remote Management of Guests Figure 19.10. Add Connection Enter the root password for the selected host when prompted. A remote host is now connected and appears in the main virt-manager window. Figure 19.11. Remote host in the main virt-manager window
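The same remote connection can also be opened from the command line, which is useful for scripting or for verifying SSH access before using the GUI. The host name below is a placeholder:
# Open virt-manager directly against a remote host over an SSH tunnel
virt-manager -c qemu+ssh://root@remote.example.com/system
# Or verify the connection first with virsh
virsh -c qemu+ssh://root@remote.example.com/system list --all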
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Managing_guests_with_the_Virtual_Machine_Manager_virt_manager-Adding_a_remote_connection
6.4. Bridged Networking
6.4. Bridged Networking Bridged networking (also known as network bridging or virtual network switching) is used to place virtual machine network interfaces on the same network as the physical interface. Bridges require minimal configuration and make a virtual machine appear on an existing network, which reduces management overhead and network complexity. As bridges contain few components and configuration variables, they provide a transparent setup which is straightforward to understand and troubleshoot, if required. Bridging can be configured in a virtualized environment using standard Red Hat Enterprise Linux tools, virt-manager , or libvirt , and is described in the following sections. However, even in a virtualized environment, bridges may be more easily created using the host operating system's networking tools. More information about this bridge creation method can be found in the Red Hat Enterprise Linux 7 Networking Guide . 6.4.1. Configuring Bridged Networking on a Red Hat Enterprise Linux 7 Host Bridged networking can be configured for virtual machines on a Red Hat Enterprise Linux host, independent of the virtualization management tools. This configuration is mainly recommended when the virtualization bridge is the host's only network interface, or is the host's management network interface. For instructions on configuring network bridging without using virtualization tools, see the Red Hat Enterprise Linux 7 Networking Guide . 6.4.2. Bridged Networking with Virtual Machine Manager This section provides instructions on creating a bridge from a host machine's interface to a guest virtual machine using virt-manager . Note Depending on your environment, setting up a bridge with libvirt tools in Red Hat Enterprise Linux 7 may require disabling Network Manager, which is not recommended by Red Hat. A bridge created with libvirt also requires libvirtd to be running for the bridge to maintain network connectivity. It is recommended to configure bridged networking on the physical Red Hat Enterprise Linux host as described in the Red Hat Enterprise Linux 7 Networking Guide , while using libvirt after bridge creation to add virtual machine interfaces to the bridges. Procedure 6.1. Creating a bridge with virt-manager From the virt-manager main menu, click Edit ⇒ Connection Details to open the Connection Details window. Click the Network Interfaces tab. Click the + at the bottom of the window to configure a new network interface. In the Interface type drop-down menu, select Bridge , and then click Forward to continue. Figure 6.1. Adding a bridge In the Name field, enter a name for the bridge, such as br0 . Select a Start mode from the drop-down menu. Choose from one of the following: none - deactivates the bridge onboot - activates the bridge on the guest virtual machine reboot hotplug - activates the bridge even if the guest virtual machine is running Check the Activate now check box to activate the bridge immediately. To configure either the IP settings or Bridge settings , click the appropriate Configure button. A separate window will open to specify the required settings. Make any necessary changes and click OK when done. Select the physical interface to connect to your virtual machines. If the interface is currently in use by another guest virtual machine, you will receive a warning message. Click Finish and the wizard closes, taking you back to the Connections menu. Figure 6.2. Adding a bridge Select the bridge to use, and click Apply to exit the wizard. 
To stop the interface, click the Stop Interface key. Once the bridge is stopped, to delete the interface, click the Delete Interface key. 6.4.3. Bridged Networking with libvirt Depending on your environment, setting up a bridge with libvirt in Red Hat Enterprise Linux 7 may require disabling Network Manager, which is not recommended by Red Hat. This also requires libvirtd to be running for the bridge to operate. It is recommended to configure bridged networking on the physical Red Hat Enterprise Linux host as described in the Red Hat Enterprise Linux 7 Networking Guide . Important libvirt is now able to take advantage of new kernel tunable parameters to manage host bridge forwarding database (FDB) entries, thus potentially improving system network performance when bridging multiple virtual machines. Set the macTableManager attribute of a network's <bridge> element to 'libvirt' in the host's XML configuration file: This will turn off learning (flood) mode on all bridge ports, and libvirt will add or remove entries to the FDB as necessary. Along with removing the overhead of learning the proper forwarding ports for MAC addresses, this also allows the kernel to disable promiscuous mode on the physical device that connects the bridge to the network, which further reduces overhead.
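The following commands are a sketch of how the macTableManager setting is typically applied and verified with virsh ; the stock default network is used here only as an example of a network whose bridge device libvirt manages:
# Edit the network definition and add macTableManager='libvirt' to its <bridge> element
virsh net-edit default
# Destroy and restart the network so the change takes effect
virsh net-destroy default
virsh net-start default
# Confirm that the attribute is present in the live definition
virsh net-dumpxml default | grep macTableManager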
[ "<bridge name='br0' macTableManager='libvirt'/>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Network_configuration-Bridged_networking